An age-aware internet

Some of my recent blogs have referred to an argument going on about online age verification. Almost nobody is now arguing that, in principle, online age verification or age assurance (av/aa) is a “bad thing”. That in itself is a huge step forward compared to where we were not so long ago.

What is being contested, however, is the circumstances in which av/aa should be carried out, who should have the responsibility and therefore the legal liability for carrying it out, and how and where it ought to be carried out.

I’m going to look at this in two separate blogs, this being the first. Obviously.

When should av/aa be done?

This is the easiest question. Now that self-regulation is all but dead, the answer is

“at least where the law requires it”

In many jurisdictions, specific products and services have for some time been governed by legally binding av/aa rules, e.g. in respect of online gambling and the purchase of alcohol and tobacco. In the latter cases, at least in the UK, many firms also require proof of age at the point of physical delivery. This is not a legal requirement, but it may become one. In his book “Age Restricted Sales”, Tony Allen tells us there are over 200 items subject to a legally based age restriction.

Ofcom publishes its codes today

This blog is being published on the day the UK’s internet regulator, Ofcom, publishes its long-awaited codes for “Protecting children from harms online”. The provisions start to be enforceable from 25th July and are in addition to the “Illegal content codes of practice”, which came into effect last month.

The harms online codes apply to businesses which provide search services and user-to-user services likely to be accessed by children (even if children are not a target audience).

There is a whole section on the necessity of using “highly effective” age assurance to restrict children’s access to age-inappropriate harmful content. Gaming and other interactive environments are also caught by these and other rules. Hefty fines can be applied if a business fails to comply. Ultimately, a business that refuses to comply can be barred from operating within the UK.

If a site or service carries out a risk assessment and concludes there is zero risk to any child from anything that might be shown or happen on or through the services or products it supplies, then it goes without saying that it need not put any av/aa arrangements in place.

The dominant form of av/aa

The current dominant form of av/aa typically involves the owner of a website, app or service being responsible for ensuring an av/aa process is completed at the point of first use by an end-user (you or me).

The av/aa process therefore takes place, and is explained in accessible language, close to the moment someone first indicates an intention to enter a space which is age-sensitive in some way. Most social media sites in the UK stipulate 13 as the minimum age for opening an account and getting a profile without parental consent. Elsewhere the ages typically range from 13 to 16.

A lot of technical innovation has been going on. A variety of methods have been developed to perform the verification or assurance act.

The available systems can achieve an exceptionally high level of accuracy: pretty much 100% for 18s and above and extremely close to that in lower age bands. There are usually routes to appeal a decision about a person’s age if they disagree with it.

Some av/aa systems can include a “liveness” test to minimise or eliminate the risk of impersonation or to detect someone using a mask or makeup to try to disguise their true age.

Absent a legally determined specific age limit e.g. 18, the degree of precision or certainty required about someone’s age will flow from and be determined by the level of risk of harm associated with the environment in question.

For example, some classes of content might be deemed suitable only for persons between the ages of, say, 13 and 16. Others might want to limit entry to certain spaces to people below or between certain ages. If there is any reasonable doubt about whether an individual fits a specified age criterion, secondary systems can be invoked to resolve the matter and, as mentioned already, appeals systems exist if the person and the tech disagree.

Many of the av/aa systems retain no data about the people they verify. It’s a closed loop. A digital token can be issued which proves an av/aa process was completed for the account in question and what the result was, i.e. what age the person was found to be. But that’s it. The token is not linked to anything which could enable anyone to determine the physical-world identity, address, patterns of consumption or whatever of the individual concerned.
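
To make the “closed loop” idea concrete, here is a minimal sketch, in Python, of what such a token’s payload might contain. The field names and structure are my own assumptions for illustration, not any real provider’s format.

```python
# Illustrative only: a hypothetical, non-identifying av/aa result token.
# Field names and structure are assumptions for the sake of the sketch,
# not any real provider's scheme.
from dataclasses import dataclass
import secrets
import time


@dataclass(frozen=True)
class AgeAssuranceToken:
    token_id: str   # opaque random identifier, meaningless outside the scheme
    result: str     # e.g. "18_or_over" or an age band such as "13-15"
    issued_at: int  # Unix timestamp of when the check was completed
    provider: str   # which av/aa provider performed the check
    # Deliberately absent: name, date of birth, address, documents, browsing data.


def issue_token(result: str, provider: str) -> AgeAssuranceToken:
    """Mint a token that records only the outcome of the check."""
    return AgeAssuranceToken(
        token_id=secrets.token_urlsafe(16),
        result=result,
        issued_at=int(time.time()),
        provider=provider,
    )


token = issue_token(result="18_or_over", provider="ExampleAgeCheckCo")
print(token)
```

The point of the sketch is what is missing: nothing in the payload ties the result back to a real-world person.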

Once an av/aa check has been completed, the technology exists which, in principle, obviates the need for it to be repeated on any subsequent visit to the same site or service, save where there is a legal requirement to do otherwise.

Systems are evolving to allow a user to complete an av/aa process once, without needing to repeat it when they visit other sites or services which participate in an interoperable scheme. However, some av/aa solutions require periodic reconfirmation of the user’s age.
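
As a purely illustrative sketch of how reuse within such an interoperable scheme might work, here is a small Python example. The trust list and reconfirmation window are invented assumptions, not any real scheme’s rules.

```python
import time

# Sketch of how a site in a hypothetical interoperable scheme might decide to
# reuse an earlier av/aa result instead of running a fresh check. The provider
# names and the reconfirmation window are invented for illustration.
TRUSTED_PROVIDERS = {"ExampleAgeCheckCo", "AnotherAgeCheckCo"}  # hypothetical
RECONFIRM_AFTER_SECONDS = 90 * 24 * 60 * 60                     # assumed policy


def can_reuse_result(provider: str, issued_at: int) -> bool:
    """Accept an earlier result if its provider belongs to the scheme and the
    check is recent enough; otherwise the user goes through av/aa again."""
    fresh_enough = (time.time() - issued_at) <= RECONFIRM_AFTER_SECONDS
    return provider in TRUSTED_PROVIDERS and fresh_enough


# e.g. can_reuse_result("ExampleAgeCheckCo", issued_at=1735689600)
```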

Happening at scale and very successfully

The key point is that, using a variety of methods (see above), online av/aa is already happening very successfully, at scale, in many parts of the world and for a wide range of products, services and content.

The entity providing the age-related product, service or content will not normally do the av/aa itself. In some jurisdictions that is, or will be, expressly forbidden. It is one of the ways in which the public is reassured that their real-world identity, address or patterns of consumption cannot be known or inferred from any data emanating from or through the av/aa process.

The supplier of the age-restricted product or service will therefore normally have a contract with a specialist av/aa provider, who will send back a signal or token indicating the person’s age, or the age range within which they fall. It will usually come in the form of a “yes” or a “no”. Nothing else.
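
For illustration, here is a sketch of the supplier’s side of that exchange, assuming a hypothetical provider endpoint that answers with nothing more than a yes or a no. The endpoint, request shape and response field are my own assumptions, not a real API.

```python
# Illustrative sketch of the supplier's side of the exchange. The provider
# endpoint and response shape are hypothetical assumptions, not a real API.
import json
from urllib import request

AGE_CHECK_ENDPOINT = "https://provider.example/age-check"  # hypothetical


def meets_age_requirement(session_token: str, minimum_age: int) -> bool:
    """Ask the av/aa provider a single question and get back yes or no.

    The supplier never receives a date of birth, name or address, only
    whether the person behind the session meets the threshold.
    """
    payload = json.dumps({"session": session_token, "minimum_age": minimum_age})
    req = request.Request(
        AGE_CHECK_ENDPOINT,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        answer = json.load(resp)  # e.g. {"meets_requirement": true}
    return bool(answer.get("meets_requirement", False))


# Usage: gate entry to an age-restricted area on a simple yes/no.
# if meets_age_requirement(session_token, minimum_age=18):
#     grant_access()
# else:
#     deny_access()
```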

Once engaged with a site or service, if people choose to buy or pay for something, that can change things, depending on the method of payment used. But that has nothing to do with and is entirely separate from the necessary, prior av/aa process.

A false polarity

A wholly new approach is being promoted. It is being promoted by, among others, Meta, X and Snap. Pornhub is backing it too. This is not the most enticing combination of commercial interests, IMHO, but we must play the ball not the player.

The method they are promoting, for shorthand, is called “on-device”, meaning that is where the av/aa should occur, and I will discuss it in more detail in my next blog. However, it is important to note these businesses all stand to gain financially from the new approach. That important fact is not always given any prominence when they explain why they support it.

Where I fall out with some of the promoters of the on-device approach is where they have been, sometimes rather aggressively, rubbishing the current dominant form of carrying out av/aa. In so doing they create an entirely false polarity.

They suggest, for example, the current dominant form of doing av/aa will not work satisfactorily at the kind of scale now envisaged as large platforms begin to engage. This is just not true. And the claims they make for their preferred alternative cannot be proved because the system does not yet exist. It remains entirely theoretical.

I have no problem with the on-device approach. On the contrary, it could play a valuable role in protecting children but, as my next blog will show, it does not cover all use cases.

A blend of the two approaches is likely to be where we end up. At the end of the day, the only question that matters is

“Does it work to protect children?”

A new phase

As hinted at earlier, the bigger picture, perhaps, is that the internet is going through a new phase in its evolution. It is becoming an age-aware internet.

For some time we have known that, globally, 1 in 3 of all internet users is a child. That has consequences. Belatedly, these are finally being recognised and acted upon.

We might be arguing about the small print of how this age-aware internet will look and work, important small print, but still only small print. The headline is settled.

Av/aa, safety by design, security by default, age-appropriate design codes… It’s all adding up to a different and better kind of internet.

I can see light at the end of the tunnel as more and more jurisdictions around the world are getting busy. We know how to do this.
