A major problem has arisen in the context of children’s well-being, specifically where end-to-end encryption (E2EE) has been or is being integrated into mass, online messaging systems which are indiscriminately available, easily accessible, easy to use and are, or seem to be, free at the point of use. We already know one of the most popular messaging services, Apple’s iMessage, works in this way and on its own admission is one of the worst offenders. Two more biggies, Meta’s Facebook Messenger and Instagram, are about to be added to the list. If this is allowed to stand, more will follow. The herd instinct will kick in.
Recently, over 1,800 people signed and published a letter suggesting there be a pause in the further development of certain kinds of AI. This is meant to give the world a chance to work out rules so as to avoid what the signatories believe could be catastrophic consequences. I’m thinking maybe we need a similar initiative in respect of E2EE.
In theory, there will remain other ways in which law enforcement can pursue criminals who harm children using E2EE. But for practical purposes, what E2EE threatens to do, and is already doing, is create a substantial new set of spaces which are beyond the reach of the courts. In this way the Rule of Law is seriously undermined. More to the point in this context, growing numbers of children will continue to be harmed while the perpetrators are rewarded with impunity. Yet it need not be that way.
In days of old
There was a time when anyone could encrypt anything that was sent over the internet, but the tools to do so, e.g. PGP, were extremely clunky and slow. Tiny numbers of people or businesses used them. From Day 1, sections of law enforcement didn’t like it, but there were no immediately visible or comprehensible consequences such as might stir up or attract the attention of the public, politicians or the public policy-making world, at least not in a major way.
Those days are long gone, although the manner in which the issue is now being reported in sections of the media is blunting the realisation of what is at stake.
Scale, speed and technical and jurisdictional complexity change everything
The fact is scale, speed and technical and jurisdictional complexity change everything. A small fire in a remote section of the forest can be ignored. When it starts to spread you have to act. It’s spreading.
The unintended and unforeseen consequences
It is simply not credible to believe the jurists and politicians responsible for shaping our human rights and privacy laws ever intended their work to threaten the Rule of Law, either in theory or in practice. There is no codicil to the Universal Declaration of Human Rights or any similar document which says:
If, in the currently unforeseeable future, technology were to advance to such an extent that private companies or private individuals can limit the ability of courts of law to protect an individual whose rights under this or any future human rights instruments have been violated, then don’t worry. We’re OK with that.
Likewise, if private companies or private individuals want to set themselves up as final arbiters of what our jurisprudence really means, then ditto. That’s cool. And btw, we hold this truth to be self-evident. It is set in stone forever.
Cui bono?
For the tech companies leading much of the lobbying against measures to address the challenges being thrown up by E2EE, its attractiveness is obvious. You cannot have any legal liability for stuff you never knew existed because you couldn’t see it. The risk of PR disasters is reduced, and the costs of lawyers and adverse judgements are avoided. Even if s230 and similar immunities were abolished or eroded, the cloak of invisibility stands ready to protect you. Big Tick.
Sticking with that theme, you cannot be expected to spend money on moderators who are not able to see anything. For Big Tech in social media, the net outcome is that money-earning metadata about users’ interactions and transactions remains untouched while overhead costs fall. What’s not to like? Double Big Tick.
Is it a binary choice? Emphatically not.
Here’s what it boils down to: is it possible to build systems which can detect signs of illegal activity which threatens children, and for these to work in and around messaging systems? We already know the answer to that. It is “yes”. It has been going on since about 2009, when Microsoft’s PhotoDNA first emerged. And with no known adverse consequences.
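PhotoDNA itself is proprietary and its internals are not public, so purely as an illustration of the underlying technique, here is a minimal Python sketch of perceptual hash matching using a much cruder “difference hash” (dHash). The idea is the same: images already confirmed as illegal are reduced to compact fingerprints, and a new image is flagged only if its fingerprint sits within a small distance of one on the list. The KNOWN_HASHES value below is a placeholder I have invented for the example, not real data.

```python
from PIL import Image  # Pillow

def dhash(image_path, hash_size=8):
    """Compute a simple 64-bit difference hash of an image."""
    # Shrink to a (hash_size+1) x hash_size greyscale grid; detail is
    # discarded so resizing/re-compression barely changes the hash.
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            # Each adjacent-pixel comparison contributes one bit
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Placeholder fingerprint list standing in for a real database of
# known-illegal-image hashes held by a recognised authority.
KNOWN_HASHES = {0x3C3C3C3C3C3C3C3C}

def flags_match(image_path, threshold=5):
    """True only if the image is near-identical to a known image."""
    h = dhash(image_path)
    return any(hamming(h, k) <= threshold for k in KNOWN_HASHES)
```

The tolerance threshold is the point of a perceptual hash: it survives resizing and re-compression, which is why this family of techniques works at scale where an exact cryptographic hash would not.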
Is it possible to continue using these tools in association with messaging systems and to encase them in rules-based accountability, auditability and other systems which will satisfy any reasonably minded person that nothing else, nothing unlawful or unauthorised, is happening or resulting? By this I mean the content of a message is not read or seen by any machines or humans, and no data are examined, stored, collected or processed in any way save where, and only where, a specific, reliable signal of probable illegality is flagged. Absent confirmed evidence of illegality, there’s an end to it. No records are made so none are kept. Nobody is investigated or named. No harm. No foul.
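To make the “no match, no record” principle concrete, here is a hypothetical sketch of how such a check could sit in a client before encryption. It reuses flags_match and dhash from the sketch above; the function name and the AuditRecord shape are my own inventions for illustration, not any deployed system’s design. The point is structural: the only code path that creates any record is the one triggered by a confirmed match, and even then the record holds the matched fingerprint, never the message.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class AuditRecord:
    # Created only on a confirmed match against the known-hash list;
    # carries the fingerprint and a timestamp, not message content.
    matched_hash: int
    timestamp: str

def scan_outgoing_attachment(image_path: str) -> Optional[AuditRecord]:
    """Check an attachment against the known-hash list before encryption.

    If nothing matches, the function returns None: nothing about the
    message or the image is logged, stored or transmitted anywhere.
    """
    if flags_match(image_path):  # from the dHash sketch above
        return AuditRecord(
            matched_hash=dhash(image_path),
            timestamp=datetime.now(timezone.utc).isoformat(),
        )
    return None  # no match: no record made, so none kept
```

Auditability then amounts to being able to demonstrate, in code and through independent review, that no other path out of the scan exists.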
So the answer to the second question is, or at least must be, “yes”.
It is not a binary choice between protecting children and abandoning a right to privacy.
If people argue it is a binary choice, if they say they do not believe it is possible to construct rules-based accountability, auditability and other systems which will satisfy any reasonably minded person that nothing else, nothing unlawful or unauthorised, is happening or being caused by these systems, what they are really saying is tech is free to do whatever it likes. Tech is free to set its own limits and put itself beyond the control of Governments, even in the liberal democracies which routinely respect the Rule of Law.
I doubt the great British public or the public in any of the liberal democracies are ready to accept such a proposition. And in the particular instance of crimes against children, we know with complete certainty a significant majority of the public is on our side. The only question most people ask is “If tools to protect children exist, why aren’t they being used? Why doesn’t the law require them to be used?”