On 10th October the LIBE Committee of the European Parliament met with Commissioner Ylva Johansson to discuss the Commission’s proposals for a new strategy to combat child sexual abuse.
Much of the focus at the meeting was on three particular types of child protection tool, currently deployed lawfully in the EU under a temporary derogation and, on a permanent basis, in many other liberal democratic jurisdictions.
The first of these tools identifies already known illegal child sexual abuse material (csam), reports it and gets it deleted. It does so with an expected error rate of around 1 in 50 billion. The second can identify and flag for human review material which is likely to be csam. The third can identify and flag for human review patterns of behaviour suggesting a child may be being groomed for sexual purposes.
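To make the first of these concrete: at its simplest, a known-csam tool compares a fingerprint of an uploaded file against a database of fingerprints of material that has already been confirmed as illegal. The Python sketch below is purely illustrative and the names in it are hypothetical; real deployments use perceptual hashes (so that resized or recompressed copies still match) rather than the exact cryptographic hash shown here.

```python
import hashlib

# Hypothetical, purely illustrative set of fingerprints of already known illegal images.
# In real deployments this is a curated database of perceptual hashes maintained with
# bodies such as NCMEC or the IWF, not plain SHA-256 values.
KNOWN_CSAM_DIGESTS: set[str] = set()

def fingerprint(file_bytes: bytes) -> str:
    """Return a fingerprint of the file. An exact cryptographic hash is shown for
    simplicity; production tools use perceptual hashing so near-duplicates still match."""
    return hashlib.sha256(file_bytes).hexdigest()

def matches_known_csam(file_bytes: bytes) -> bool:
    """True only if the file matches an already known item; when there is no match,
    nothing about the file is read, stored or flagged."""
    return fingerprint(file_bytes) in KNOWN_CSAM_DIGESTS

def handle_upload(file_bytes: bytes) -> None:
    if matches_known_csam(file_bytes):
        report_and_remove(file_bytes)  # hypothetical reporting hook

def report_and_remove(file_bytes: bytes) -> None:
    """Forward to the relevant hotline or authority and delete the copy."""
    ...
```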
An approved list of technologies
In future, under the EU’s proposals, the tools which may be deployed to perform these tasks will come from a list published by a new European Centre. The Centre will vouch for the integrity of the tools. This is a very important development. At the moment it’s a free-for-all. No checks. No independent third-party guarantees.
Same old same old
At the meeting with Commissioner Johansson several of the points raised by a handful of MEPs represented exactly the kind of problem I mentioned in my last blog. Policy-making in relation to the internet continues to be bedevilled by a historic legacy of mistrust and suspicion about the motives and intentions of all the key actors.
How else can we explain, for example, the assertion that one or more, maybe all, of the child protection tools mentioned represent a form of “mass surveillance”? Or that they represent an infringement of people’s right to privacy? Neither is true.
Luggage and body scanners, and sniffing dogs
Think about luggage or body scanners at an airport or at the entrance to a building, or a police dog that sniffs for drugs.
These tools simply and only look for known and specific signs of criminal behaviour. They are not engaged in a fishing expedition or intelligently looking at, digesting or storing content.
If a metaphorical or actual light flashes and someone asks you to open your bag or suitcase or empty your pockets, but it turns out there was nothing improper there, that the known and specific signs which triggered the flashing light had produced a false alarm, the matter ends there.
No harm. No foul. No notes go on anybody’s file. There is no finger-pointing or public disgrace. And if you suspect notes or records are nevertheless being kept, the answer is to ensure transparency. “Sunlight is the best disinfectant” (thank you US Supreme Court Justice Brandeis).
Nobody’s right to privacy or confidentiality has been infringed because, on any reasonable reading, nothing of any material significance actually happened.
While on the subject of the US Supreme Court, here are a couple of decisions which are relevant to this discussion. In the case of United States v Place, the Supreme Court held that the use of sniffer dogs at airports does not constitute an unlawful search. And at least one legal scholar thought it highly likely a future Supreme Court would “analogize” and adopt the same reasoning if a case were to come before it concerning child sexual abuse on the internet. And in Illinois v Caballes the Court decided there was no “legitimate privacy interest in possessing contraband”. Quite so, and that must be even more the case if no significant privacy interest was infringed in the first place.
Acting on the basis of individual suspicion
It may be impossible to calculate this with any degree of certainty, but I am reliably informed the vast majority of information which law enforcement agencies receive about alleged wrong-doing or alleged wrong-doers turns out to be useless. But they have to check out, if not all of it (some may be very obviously off-the-wall), then at least a good deal of it. However, as you would expect, they only act on the basis of “individual suspicion”, i.e. on the basis of concrete intelligence which points them towards a particular person or persons or, if relevant, a particular organization or set of organizations. Companies are in a not dissimilar position when they receive complaints about another user’s behaviour. They look before they leap. Or they should.
Every company forbids its network from being used for criminal purposes. The problem is, hitherto, they have lacked the means, and some have also lacked the will, to ensure this prohibition means anything. We should therefore think of the three child protection tools referred to above as being, at one level, no more than a way of helping businesses enforce their own Ts&Cs.
I take it no company would wish to advertise itself in the following terms:
Come and join us. You can ignore our Ts&Cs. We will try hard never to enforce them too efficiently. If you want to commit crimes online, we’re your guys. Welcome.
But should companies be doing this kind of thing at all?
One of the arguments you hear against the use of the tools outlined is:
these are matters for the police or other state authorities, companies shouldn’t be doing it
Think about that for a minute. Who actually runs the consumer-facing networks on which the Apps, platforms or tools operate? Surely it is not being suggested the police or another state agency should sit on or in the networks, or inside the developers’ offices, to monitor activity? Or is it? And even if that were the case, wouldn’t we still have to draw up rules governing how the public officials could or should function in such a role?
There have been porous boundaries between private and public actors for many years. It is no longer possible to draw neat lines which divide the one from the other. In the field of communications technologies, as long as companies operate within publicly stated guidelines, and their performance can be monitored or scrutinized through transparency mechanisms, use of the child protection tools represents a practical and realistic way in which children can be protected in an online environment. Technical tools of this kind are certainly not all that is needed but they are definitely part of the answer.
Overwhelming support from the public
We all accept the situation at airports and public buildings because we understand and accept the underlying social purpose of the precautionary measures. They benefit us all. The same is unquestionably true in respect of the protection of children on the internet. Within the EU there is overwhelming public support for the deployment of the tools. A great many people believe their use should be mandatory.
The importance of messaging apps
It is hard to think of a single internet technology which has not been used to distribute or publish csam or groom children. Websites, social media, chat in all its guises, games, FTP, email, Usenet newsgroups: they have all featured and continue to do so. However, messaging environments have acquired a particular importance. This is because we have reliable data from Meta. The data show the gigantic volume of criminal misuse of their messaging Apps in ways which harm children.
As their principal messaging App is encrypted end-to-end, Apple acknowledged they have no idea how much csam might be being exchanged using their network. To Apple’s great credit they seem intent on doing something about this unfortunate state of affairs. One can only imagine they have been encouraged to do so because, knowing the scale on which it is happening on Meta’s messaging Apps, they appreciate it must also be happening on theirs. If anything, because theirs is already encrypted, it is likely to be worse.
The problem is what to do about currently encrypted environments, or about environments which may be encrypted in the future. Apple came up with an elegant solution: client-side scanning. Even Instagram appears to be experimenting with it.
Nobody I work with wants to break or compromise strong encryption. The whole idea of back doors or hidden routes is repugnant. Client-side scanning honours encryption because it looks only for known and specific signs of criminal behaviour or content before the message is encrypted.
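For what it is worth, here is a minimal, purely illustrative Python sketch of that ordering, assuming a hypothetical local list of fingerprints (known_digests) and using an ordinary symmetric cipher (Fernet from the cryptography package) to stand in for a messenger’s real end-to-end protocol. The only point it makes is that the check runs on the sender’s device before encryption; the cipher itself, and everything downstream of it, is unchanged.

```python
import hashlib
from cryptography.fernet import Fernet  # stand-in for the messenger's real E2E protocol

# Hypothetical local list of fingerprints of known csam, shipped to the device.
known_digests: set[str] = set()

def client_side_check(attachment: bytes) -> bool:
    """Runs on the sender's own device, before anything leaves it."""
    return hashlib.sha256(attachment).hexdigest() in known_digests

def send_attachment(attachment: bytes, key: bytes, transport) -> None:
    if client_side_check(attachment):
        # Flag for review under whatever process the law and the provider's
        # terms prescribe; unmatched messages are never read or retained.
        raise RuntimeError("attachment matches known illegal material")
    ciphertext = Fernet(key).encrypt(attachment)  # the encryption step is untouched
    transport.send(ciphertext)                    # only ciphertext leaves the device
```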
To argue against this position is essentially to argue tech has a right to decide to put criminal action on an enormous scale, I repeat, on an enormous scale, outwith the reach of the state. Who voted for that?
Strong encryption has been promoted into a space for which it was never intended and its use, like that of several other bits of digital technology, has led to unintended, unforeseen and unwanted consequences. Rather than roll over and say nothing can be done, we say “no, something can be done.” Use these tools.
Driving it underground or into dark corners
One of the several spurious, desperate arguments you hear about why we should not make this or that platform or App safer for children is that if you do, for example if the onboarding process is made too laborious or time-consuming (meaning it might take three minutes instead of ninety seconds or whatever), then you will drive people underground or into darker corners of the internet where it will become more difficult to police or check what is going on. Hmm.
So we are being asked to accept a higher degree of danger to children in the places where large numbers of children go, against the possibility that a smaller number of children, who might otherwise have been attracted to the safer place, will instead go somewhere else less safe. That doesn’t add up. It is completely unworldly. Knowingly creating or allowing danger to encourage safety? Twisted logic indeed.
Surely the only proper course of action is to do everything that is reasonable to make as many environments as possible as safe as possible for children, and then turn our attention to the less fastidious ones? For that we will need Governments and legislatures to act in just the way so many of them now appear to be intent on doing.
Totalitarian regimes
In the liberal democracies where the rule of law is routinely respected, and institutions operate within a broad framework of respect for human rights, we have or could devise institutions which can curb any possible tendency to allow tech to be abused by those in authority or by the companies themselves. But what if we do something – use a tool, App or programme – which a totalitarian regime can use for evil purposes? Maybe it just needs a small technical tweak, new criteria to be inserted. Instead of csam the algorithms are directed to find stuff that pokes fun at the Supreme Leader.
What this amounts to is saying “OK, you could protect your children by doing x in your own country, but you mustn’t, because if you do, the Supreme Leader in country y will use the same thing and claim he is only doing what you’re doing, whereas in fact he isn’t, and there are no mechanisms in his country to prevent that.” Alternatively it is an argument which says you should never use or invent anything which can be misused. Absolutely.
I have met the Supreme Leader. Believe me, he was doing bad stuff to his people long before the internet came along and he does not sit by the phone waiting for permission from a Government in a liberal democracy before doing more bad stuff to them.
Of course we should do whatever we can to help people in the Supreme Leader’s benighted land to overthrow the guy at the helm, or make it harder for him to oppress them, but it is ridiculous to suggest we can only do this by making some children, e.g. our own, more vulnerable to risk than they need be. If the internet ever was simply a vehicle for overthrowing totalitarian regimes, it long since ceased to be one, and children’s safety should not now be sacrificed in a doomed attempt to revive that Utopian model.
The internet is a global entity
Closely allied to that last argument is the one about the importance of securing global agreements “because the internet is a global entity”. Global agreements can be important. General Comment 25 on the UNCRC is a classic example. But at a time when a permanent member of the UN Security Council is waging an aggressive war on a neighbouring state, it seems absurd to ask us to put all our faith in global institutions. And anyway, even if Russia suddenly became Sweden, it is a nonsense. Nobody lives globally. For practical purposes everyone’s feet touch ground which is in a national jurisdiction.
Typically, companies will obey the law of the country in which their Head Office is domiciled. They will also obey the law in a jurisdiction in which they operate. If there are conflicts of laws there are mechanisms and known ways of resolving them. Some companies choose not to offer their services in particular jurisdictions precisely because they do not want to be bound by the local rules. That is their right. Small countries with less valuable markets may struggle to get the attention of a big global player, but that raises different issues.
The underlying technology of the internet represents a global system, or at least it has done up until now. But in the main, though not always, what we are concerned about when we discuss children’s safety is not the infrastructure of the internet, it is the application layer. The front end. The bit Meta, Apple, TikTok and others use to connect with us.
Companies would love a single set of rules which they could apply identically at the front end in every jurisdiction. This would minimise compliance costs and generally make the administration and management of the site or service much easier. That is not a good enough reason to accept a higher level of risk to children in the place we happen to live.
If we can win progressive and effective change in one nation state we should go for it. It can act as a model for others to follow. I don’t doubt the EU is going to set global standards, in the same way the Council of Europe also does. What we certainly should never do is decline to win, or even to try for, progressive change in our own country or in our region or economic bloc on the vague ground that everyone, or lots of countries, must move forward simultaneously. Cui bono? We have delayed long enough.