A problem of trust. Not tech. Part 7. End of series.

The discussion about the proper role of encryption on the internet can be traced back many years, but for most of that time it rumbled along in dark corners and was not connected with the protection of children, at least not in a major way.

That all changed when Meta announced its intention to introduce end-to-end encryption (E2EE) by default to (currently unencrypted) Facebook Messenger and Instagram Direct.

By a country mile these two apps were, and remain, the world’s largest sources of discovered online child sex abuse material (csam). We have to acknowledge a degree of unfairness in the amount of attention this focuses on Meta. Other platforms or apps undoubtedly could and should be bracketed alongside Facebook, if not for the same huge volumes then nevertheless for large ones. They are not, but only because no data are available for them, whereas data are available for Meta. This doesn’t alter the underlying reality of Meta’s situation, but there you go.

Occupying pole position in an execrable league of this sort might have propelled some companies one way. Meta has gone the other. Whatever sympathy people might have had for them evaporates at this point.

Thus, with its proposal to introduce E2EE, at a stroke all, or a very large proportion, of the csam presently being discovered on Facebook properties will no longer be discovered. It will still be there. Probably the volume will increase. Why wouldn’t it, once more people realise they are able to act with the impunity E2EE can in practice confer?

The continued circulation of a child sex abuse image means it will carry on harming the victim depicted in it. It may also prompt further offences against that victim, adding to the harm already done, as well as prompting offences against children around the world as yet unharmed. Children in countries with the least developed awareness of these types of risk, or the least capacity to address them, will likely be in the greatest danger.

The consequences of this wilful blindness are enormous. They cannot be deflected or minimised by references to downstream deficiencies, to overburdened police forces, hotlines or court systems, or by pointing to the inadequacy of social services or the insufficiency of therapeutic support for victims.

This is because that initial report is the first link in a chain. It is a link which will no longer be there. The chain will be broken. Only one entity is responsible for that and it is Meta. Even if all the downstream elements of the chain were improved 1,000%, it would count for nought for the victims who are never found and the perpetrators who remain at large or untreated, free to offend again. Of course we all need to address the systemic downstream weaknesses, but these do not let Meta off the hook on which it has chosen to impale itself and children.

The “pivot to privacy”

Meta tries to justify its shift towards E2EE by telling us enhanced privacy is or will be the new norm, the expectation of all internet users. So it’s doing it to preserve its business, but without any substantial acknowledgement of its own singular role in creating that alleged expectation in the first place. Such chutzpah. I am reminded of the guy who shot his parents then, at sentencing, asked the court to take pity on a poor orphan.

If the company uses its considerable promotional power repeatedly to tell everybody E2EE is the new standard, it becomes a self-fulfilling prophecy. Meta’s profits will increase accordingly. E2EE almost certainly costs a lot less than employing human moderators. It also brings other benefits, helping to break the link between Meta’s name and csam and reducing or completely eliminating other potential liabilities.

As things stand Meta’s “pivot to privacy” will therefore be paid for into the indefinite future by children, but not only children. Other vulnerable groups less able to defend themselves will also suffer. If the shift to E2EE is Zuckerberg’s deliberate if unstated atonement for his past privacy sins we might all have wished he had instead gone for sackcloth and ashes then spent 40 days in the desert. Everybody would be safer.

No dispute about one aspect of the E2EE roll-out

To underline this last point, nobody, least of all Facebook, is contesting the idea that, without counter measures, E2EE will help all manner of criminals stay hidden and unaccountable, including child abusers.

The argument which is raging focuses on the types of counter measures which can be legitimately deployed to try to minimise the extent of the undesirable effects.

Apple steps up

The debate about E2EE was given fresh legs when Apple, traditionally one of the most privacy-respecting of all the major platforms, announced its intention to introduce client-side scanning (CSS) as a means of eliminating or reducing the extent to which its already encrypted messaging service could be used to transmit csam. The amount of csam being reported annually by Apple was (literally) unbelievably low, and this was plainly a source of embarrassment for the company, so they decided to do something about it. Bravo.

CSS is a clever way of identifying csam on devices before it enters the unseeable spaces. In this sense CSS does not break the end-to-end stream or weaken the encryption. Neither does it raise any issues about so-called “back doors” or exceptional access. While Apple has yet to implement CSS in respect of csam, last week in the UK they implemented other elements of a series of child protection measures which work in a similar way, that is to say on the device. In that light it would be strange indeed if they did not now go ahead with their original plan for csam, or something very like it. I fully expect they will.

The possibility of abuse and “individual suspicion” 

Opponents of CSS claim it can be abused. That is true. A lot of tech can be. The answer is to find ways to prevent the abuse, not to prevent the tech from doing its good work. Legally mandated transparency, vouched for by an independent, trustworthy source, is the most obvious answer. Easier said than done, but lots of things about this are hard.

Alternatively it is suggested CSS is a tool of “mass surveillance” or “general monitoring”. Apple’s solution is highly targeted and specific, so it is no such thing.

Rooted in PhotoDNA, CSS can only see content which has previously been determined to be illegal. Nobody’s account, profile or message is checked or examined without probable cause. CSS provides the flag. I know of no case where PhotoDNA has got it wrong resulting in any individual being put in legal or reputational jeopardy.
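The matching logic described above can be illustrated with a short sketch. PhotoDNA itself is a proprietary perceptual hash that tolerates resizing and re-compression, so the code below is only a simplified stand-in: it uses an exact SHA-256 digest and a hypothetical hash list to show the essential property, namely that content is flagged only when its hash matches a previously identified entry, and nothing about non-matching content is examined or revealed.

```python
import hashlib

# Hypothetical hash list standing in for a database of known illegal images.
# (The entry below is simply the SHA-256 digest of the bytes b"test",
# used here so the example is self-contained and checkable.)
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def hash_image(data: bytes) -> str:
    """Compute a digest of the image bytes on the device, before encryption."""
    return hashlib.sha256(data).hexdigest()

def scan_before_send(data: bytes) -> bool:
    """Client-side check: flag only if the digest matches a known-bad entry.
    Non-matching content passes through untouched and unreported."""
    return hash_image(data) in KNOWN_BAD_HASHES

print(scan_before_send(b"holiday photo"))  # False: unknown content is not flagged
print(scan_before_send(b"test"))           # True: digest matches a listed entry
```

The design point is the one the text makes: the scanner holds only hashes of previously determined illegal material, so it can say nothing about any other image on the device.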

If we are not allowed to deploy automated tools to try to detect likely signs of crime could someone hazard a guess as to how many new police officers we should recruit? I hope nobody has thrown away the Stasi manuals. They’ll come in handy.

US Supreme Court and the “dog sniffing” cases

Under the 4th Amendment to the US Constitution people are entitled to protection from unauthorized or otherwise improper searches by “the Government” (the annoying American way of referring to the police or their agents), and in this context tech companies attempting to detect csam are likely acting as “Government agents” even if there is no express legal requirement for them to do so.

In US jurisprudence there is a range of interesting and apparently still unresolved legal questions which arise in relation to messaging apps. However, in a separate but not too distant part of the forest, for example in the case of United States v Place, the Supreme Court held that the use of sniffer dogs at airports does not constitute an unlawful search. This case was followed in Illinois v Caballes, where the Court decided there was no “legitimate privacy interest in possessing contraband”. One of the reasons why the Court felt able to reach that conclusion, by the way, is because the sniffer dog identified contraband and nothing else. I think that’s a bit extreme but the point still stands. That is exactly what PhotoDNA does. It only sees illegal stuff. Nothing else.

At least one legal scholar seems pretty convinced that were the matter to come before the Supreme Court again in the context of, for example, PhotoDNA being used to identify illegal material on the internet, the Court would “analogize” and reach a similar conclusion. That surely accords with common sense.

What these cases remind us

Having a ballot paper should count for more than being employed in tech or holding a shareholder certificate in a tech company, although in the case of Meta even that is pretty worthless as one person, Mr Zuckerberg, holds a clear majority of the voting stock. And he uses it.

I mention this because no democratic institution has ever endorsed privacy as an absolute or unqualified right. No body of human rights law, international treaty or convention provides for or guarantees privacy as an absolute right or says it is a sine qua non of 21st century life.

Yet, in practice if not in theory, that is exactly where we are headed unless acceptable counter measures are put in place to address the emergence of E2EE on mass messaging platforms. Who will pay the price? Not the Gilded Princes and Princesses of Silicon Valley or the tech-savvy lawyers who populate the world of privacy advocacy. As usual it will be the “poor bloody infantry”.

Are we truly willing to make children (and others) pay the price for tech’s inability to guarantee the integrity of the new world they have created?  When we know workable solutions are available? Now.

How has this situation come about? Only because some smart dudes and dudettes thought it was a good idea. They took it upon themselves. They even managed to work out ways to make money from it, get a salary, so that’s a double hit. With false talk about genies being out of bottles, are we to be bludgeoned into acquiescence or acceptance? True enough, once tech is out there it is difficult or impossible to cancel it and call it back. But that’s not the issue here. The issue is what we do to limit the harm the genie can do. To fence him in a bit.

Privacy as an abstract idea

Everybody is or should be in favour of privacy as an abstract idea but when it gets down to brass tacks attitudes can shift markedly. In other words Meta’s statements about new norms and expectations referenced above look a bit like a contrivance designed to blot out important nuances. One might say “vital” nuances.

When privacy is no longer an abstract idea, clear majorities of adults (parents and “non-parents”) are in favour of doing what most of us would hope and expect.

In ECPAT International’s “Project Beacon” (I am an adviser to the project) adults in eight EU Member States were asked a series of questions. 

Overall, 68% supported the idea that the EU should legislate to make it a legal requirement for social media platforms to use automated tools to detect and flag signs of online sexual exploitation and abuse. In Italy and Spain the percentage rose to 75%, in the Netherlands to 72%, falling to 61% in Sweden. However, even in Sweden only 18% said they were against such legislation, with 21% registering as “don’t know” or “prefer not to say”.

And please note the substantial proportions of people who said they would be willing to give up some of their privacy if it meant children would be better protected. Mercifully, with CSS nobody has to give up any of their privacy because, to use the words of the US Supreme Court “there is no legitimate privacy interest in contraband.”

Maslow’s hierarchy

Most readers will be familiar with Maslow’s hierarchy. He constructed a pyramid to describe the human condition. At the apex is “self-actualization” which, roughly translated, means “happiness achieved through personal fulfillment”.

At the base of the pyramid are things like the ability to breathe, sleep, and get food and water. The next layer embraces health, safety and the ability to take care of the matters outlined in the first layer, for example by having a job that pays enough. The layers above describe those elements of human existence which make life worth living, taking us beyond “nasty, brutish and short”: family, friendship, self-esteem, confidence and so on.

I refer to this because, of late, I have seen several references to children’s rights being “interdependent and indivisible”. I quite accept that, but only up to a point. If we cannot guarantee the basic security and health of a child, there is no “indivisibility or interdependence” with the other rights. They are all gone, reduced or compromised. Just look at the profiles of our prison populations, of drug addicts and psychiatric patients, of suicides and self-harm. Being sexually abused as a child can have ruinous, lifelong consequences.

So when people speak of the need to “balance” different children’s rights, unless it is meant simply as a rhetorical flourish, there is an exceptionally heavy weighting which should steer us towards security.

Even where, as a percentage of all children, the proportion who become or are likely to become victims may be low, on the internet small percentages still generate very large numbers, and the extreme nature of the consequences for each individual child dictates that we must always err on the side of caution.

This may be a tad irritating. It may not always lead to a “smooth on-boarding process” or to a seamless journey in the metaverse or wherever, but if children are present in a given place certain things must follow. Just as they do in life outside of cyberspace.

Yes we must act proportionately but, like beauty, proportionality is in the eye of the beholder and the very least children’s advocates can do is make the case for children. Goodness knows there are enough people around to argue for everything else.

About John Carr

John Carr is one of the world's leading authorities on children's and young people's use of digital technologies. He is Senior Technical Adviser to Bangkok-based global NGO ECPAT International and is Secretary of the UK's Children's Charities' Coalition on Internet Safety. John is now or has formerly been an Adviser to the Council of Europe, the UN (ITU), the EU and UNICEF. John has advised many of the world's largest technology companies on online child safety. John's skill as a writer has also been widely recognised. http://johncarrcv.blogspot.com