I have just read “Balancing Privacy and Child Safety in Encrypted Environments” (the paper). If you click on the link you will see it was published by an outfit called “All Tech Is Human”. There are a few nuggets in the paper but overall I am sad to say it is not a serious document. It ought to be. The subject of child safety, privacy and encryption is hugely important but the paper resolutely refuses to rise to the occasion.
I might have written a fuller blog to provide more detail about why I think that, but I don’t have time right now, so what follows is more by way of selective notes on parts of the paper, not all of it.
On page 23 the following subheading rather jumped out at me
“Overemphasis on post-abuse response”
That’ll be news to a lot of people I work with.
Then there was this.
“Expand focus from post-abuse detection to include prevention”.
The continued publication or distribution of Child Sexual Abuse Material is continuing abuse, not “post” anything.
There are several points in the document where solutions (e.g. PhotoDNA) which “only” detect already known csam are somewhat sniffily dismissed or minimised because they can’t detect previously unknown images (other solutions can) or because they don’t focus on preventing abuse from happening in the first place. The intellectual incoherence of this latter point is so transparent I’m almost lost for words.
Turning to the other bit, the continued availability of csam is not “post-abuse” in any sense at all. It is continuing abuse or a form of reabuse. And it also creates additional and wholly new dangers for the victims depicted. Thus the detection and removal of csam is, in and of itself and without more, a prevention measure both for the victims depicted, the survivors, and for children everywhere.
Here’s why.
To the extent the continued availability of csam encourages, promotes or helps sustain paedophile activity it poses a threat to children as yet unharmed (and children already harmed) anywhere in the world where children can go online, which means practically all children everywhere in the world, from Tennessee to Timbuktu.
Then there’s the separate matter of the privacy rights of the children in the images. Not discussed. How can that be in a document with “privacy and child safety” in the title?
Did the authors of the report not even bother to visit the website of, or talk to, for example, the Phoenix 11?
No child victim of sexual abuse can consent to their abuse and, if an image of the abuse is made, still less can they consent to images of their pain and humiliation being published or distributed online or anywhere else.
Csam must be found and removed from public view as fast as possible.
If such images are allowed to be strongly encrypted, that will defeat all currently known ways of detecting them, with no workaround.
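For anyone unfamiliar with how known-image detection actually works, here is a minimal sketch. It is purely my own illustration, not anything taken from the paper or from any vendor: SHA-256 stands in for a perceptual hash of the PhotoDNA kind (real perceptual hashes are designed to survive resizing and re-compression), and the fingerprint list is hypothetical. The point it makes is simple: the check needs sight of the content, it can only catch images already on the list, and it cannot run at all if the service only ever sees ciphertext.

```python
import hashlib

# Hypothetical list of fingerprints supplied by a hotline or clearing house.
KNOWN_CSAM_FINGERPRINTS = {
    "placeholder-fingerprint-1",
    "placeholder-fingerprint-2",
}

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash such as PhotoDNA; SHA-256 is used
    # here only to keep the sketch self-contained and runnable.
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_item(image_bytes: bytes) -> bool:
    # The service must be able to see the image to compute a fingerprint.
    # Under end-to-end encryption it only ever sees ciphertext, so this
    # check can never run. A previously unknown image also sails through,
    # because by definition it has no entry in the list.
    return fingerprint(image_bytes) in KNOWN_CSAM_FINGERPRINTS
```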
Questions about how and in what form csam reports are received and processed by law enforcement agencies, the capacities and readiness of those agencies to receive reports, how investigations into the chain of offenders are conducted, even questions about how swiftly new victims can be identified, located and removed to a place of safety, though obviously enormously important in their own right, are entirely different from, and ought never to be in conflict with, the urgency of finding the images, removing them from public view and preventing their further distribution.
People who work for hotlines or who otherwise seek to secure the removal of csam from the internet ought never to think of themselves as being merely adjuncts of the police and law enforcement agencies. Obviously they will work closely with them, they have to, but their primary responsibility is to protect children. They do that by finding or receiving reports of csam and making it disappear.
Moving on.
The balance of the report
If you judge the matter solely on the basis of the words in the conclusions and recommendations, the paper’s title turns out to be a little misleading. Encryption is not mentioned at all in the recommendations and it gets a single mention in the opening paragraph of the conclusions. In the body of the report I would hazard that at least half the words, if not more, discuss things other than encryption and striking, or not striking, any kind of balance.
Rather, a great deal of the report is devoted to listing and discussing good ideas for improving all kinds of online environments from the perspective of children’s rights and welfare. No complaints about that I suppose. However, in relation to the platforms, with one notable exception, many of these ideas have been kicking around and extensively written about for almost twenty years, longer in some instances. They are all good things. They could and should be done anyway in, out of, or around encrypted environments. And they are not in any way whatsoever an answer or an alternative to the challenges, the threats posed to children, by the relatively recent mass deployment of strong encryption in large-scale consumer-oriented, consumer-accessible online messaging environments, some of which have a large presence or user-base of children, some of which don’t.
And for the avoidance of doubt, nobody I know is challenging all forms of encryption in any and all environments. The deployment of strong encryption in certain environments is what is at issue.
When strong encryption was only being deployed in limited places by tiny numbers of people, generally nerds who could handle clunky apps like PGP, circa 1992, it presented challenges, but the scale was altogether different. It could be contained or ignored. Not any more.
This is 2024, not 1994 or 2004
Reading the paper, occasionally I had to pinch myself to remind myself of that fact.
Many readers will remember when self-regulation ruled supreme and was embraced by all (myself included). Everyone believed, as people of goodwill with only and always the very best intentions in terms of making the internet safer for children, all we needed to do was get around a table and talk things through. Action would be taken and things would get better. Such innocence. I’m almost embarrassed. I drank the Kool-Aid.
True enough, some things did improve, low-hanging fruit first, but it soon became clear that, even with the low-hanging fruit, things were often not done fast enough, consistently enough or widely enough. In some key areas, e.g. in respect of csam, they weren’t done well enough either.
The volume of csam being detected is still on the increase and is likely to be further boosted as AI applications start to create a whole new wave of pseudo images using real children, as in the recent Hugh Nelson case, or by creating wholly artificial images which are indistinguishable from the real thing and so still have the same lethal effects in terms of encouraging or sustaining paedophilic behaviour.
OK, back in those early days when even the leading companies still employed relatively few people, it may well have been the case they had little knowledge or expertise in respect of children in online spaces so they genuinely wanted to hear from children’s advocates. We were all finding our way a bit. But we are long past that point now.
The paper is a history-free zone
Getting back to the paper, one of its striking absences is any real discussion of why self-regulation, talking, has failed so comprehensively around the world. Instead, the paper is imbued with an uncritical acceptance of the idea that significant progress can still be made by, er, more talking.
It’s nearly Halloween. Are we witnessing an early attempt to raise self-regulation from the dead? Memo to self. It won’t work. Don’t worry.
The authors seem to think the more talking approach, the one they advocate, will mean the world will be spared the scourge of regulatory overreach. And that really is at the heart of their paper.
“Talk” is the word
There is even a recommendation to
“Create a new framework for dialogue and exchange”
Lordy lordy, like we don’t have enough spaces which allow for that? Will one more be all that is needed to tip the balance? I don’t think so. If the Olympics awarded medals for kicking cans down the road, or into the long grass, dragging things out, Silicon Valley would sweep the board every time.
Do people really believe that some of the biggest and richest companies on the planet, employing some of the smartest people alive today, actually don’t know what the problems are? They don’t have the capacity to engage in the deepest of in-depth research and data gathering?
Is it really possible they don’t know what can be done to solve known problems? Is it inconceivable they are choosing not to solve those problems because they have calculated it would harm their business model if they did?
Below is a copy of a Top Secret document only just discovered by me. It was written by a Very Important Person.
“OK Team, let’s just keep the money rolling in for as long as possible. We know the good times will end eventually, let’s hope and try to ensure it’s very eventually. Organize lots of conferences. Join lots of multistakeholder bodies. You know the drill. Show we care. Be humble. Doesn’t cost much but gives us lots of cover. Slows down the politicians and the journalists.”
Confession time. I made that up. But I bet something like it exists somewhere.
To put that slightly differently, if companies could spot a way to make more money by solving the known problems does anyone truly believe they wouldn’t do it in a flash? Of course they would.
Fines don’t matter
One of the things the paper is right about is fines don’t matter very much, at least not to Big Tech. They are treated as part of the cost of doing business. On the other hand, irrespective of the size of the company, criminal sanctions will bite, just like they are doing in the financial world post-2008. These days senior staff and Directors in the financial services industry pay attention in ways they never did before. They have no s.230 to shelter behind. We now have criminal sanctions in the UK for social media platforms. Watch this space. I wonder if the authors of the paper considered recommending criminal sanctions be used more extensively?
Don’t get me wrong. Dialogue and exchange can be beneficial but the basis on which one participates has to be set against some notion of, if not equality, that’s impossible, then at least greater equality. When David felled Goliath it might have been a lucky shot. Enough already with being David.
Also our expectations have to be set appropriately. Big Tech is not a friend even if we get on roaringly well with individuals who work for them. People who took on Big Tobacco, Big Alcohol, Big Pharma, Big Fossil Fuels have also made similar painful journeys. I feel sad (again) having to say that but there is no big kumbaya moment ahead between online child safety advocates and Silicon Valley.
If anything, we may be going into reverse gear. Zuckerberg seems to regret having previously accepted responsibility for too much stuff:
“People are basically blaming social media and the tech industry for all these different things (that are wrong) in society”
No. My point is, Big Tech is not, or ought not to be, ignorant of how things actually work in the world, so if they put stuff out there that plays to, magnifies, facilitates, extends or is careless about bad things that are happening or are likely to happen, then that’s on them. And Big Tech also plays a part in shaping the way society works. It is not an inanimate object merely echoing noises made off stage.
Why have the EU, Australia, the UK and a lengthening queue of other jurisdictions decided to turn their backs on relying on companies to do the right thing only because it is self-evidently the right thing? Why are they turning instead to law and regulation? Not to the exclusion of dialogue and exchanges of ideas but as a framework within which everyone will be kept honest.
Even in the USA, few doubt it is the seemingly perpetual political deadlock in Congress, linked to the massive lobbying power (money) of Silicon Valley, that has prevented any meaningful new Federal laws from being passed and led to a growing number of individual States taking matters into their own hands. And please note the many legal challenges there are to these state-led initiatives.
The Apple case
The paper’s discussion of the Apple case and of client-side scanning generally is both shockingly incomplete and one-sided. There is huge potential in client-side scanning.
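To be clear about what is being dismissed, here is the basic idea. Again, this is only my own sketch, not Apple’s actual design and not anything described in the paper: with client-side scanning the match against a list of known-image fingerprints happens on the sender’s own device, before the content is end-to-end encrypted, so the encryption itself is left untouched. The encrypt, transmit and report functions below are placeholders, and the fingerprint list is hypothetical.

```python
import hashlib

KNOWN_FINGERPRINTS = {"placeholder-fingerprint"}  # hypothetical on-device list

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash; real designs use hashes that
    # tolerate resizing, cropping and re-compression.
    return hashlib.sha256(image_bytes).hexdigest()

def send_image(image_bytes: bytes, encrypt, transmit, report) -> None:
    # The check happens client-side, before encryption, so the message
    # itself can still be end-to-end encrypted in transit.
    if fingerprint(image_bytes) in KNOWN_FINGERPRINTS:
        report(image_bytes)  # e.g. queue for human review (placeholder)
    transmit(encrypt(image_bytes))
```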
While noting that more and more companies are turning to strong encryption, the paper seems to take it as read that this is inspired only by businesses’ concern to respect their users’ privacy.
From being mega privacy abusers to being massively into privacy. All altruism? A Damascene conversion. No mention of the financial benefits of extending strong encryption, e.g. the reduced overhead costs arising from not needing so many moderators. No reference to a reduction in the number of “bad things” that can be attributed to your brand because you are no longer seeing and reporting them. No reference to bad publicity avoided or to reduced legal liability. What’s not to like?
If you are the company, not the kid.
A “simple” truth
No international legal instrument, no national law of any kind in any country, and no decision of any superior court in any country has ever said privacy trumps or is superior to any and all other rights. Yet strong encryption in effect makes it so.
The paper speaks of a “delicate balance” between different rights. How can you have a “delicate balance” where one side is always and inevitably a big fat zero? It’s not delicate at all. It’s brutal. One side always wins.
Looking at metadata (which some are trying to encrypt or reduce, btw) and using AI to detect likely patterns of behaviour is all good stuff, but at the end of it you still hit an impenetrable brick wall. What was the title of that old Meat Loaf song?
“I’d do anything to protect children but I won’t do that.”
Hmm. Well it was something along those lines.
The long and the short of it
Some smart techies developed strong encryption programmes and, for their own reasons, either commercial or political, for-profits and not-for-profits took them up and decided to use them, but did so in a way that facilitates the creation of vast spaces which, for practical purposes, are beyond the reach of the law. Subpoenas and court orders can be issued until the cows come home. They cannot be honoured.
Just as the US Founding Fathers never anticipated women might be allowed to vote, or that slavery might be a problem for a lot of people, so it was that, in the shadow of World War Two and the Nuremberg trials, the people who drew up the Universal Declaration of Human Rights could not possibly have anticipated digital technologies and their potential impact on the ability of courts to enforce the law.
Crimes, frauds, scams of different kinds and other civil wrongs will go undetected or, even if they are detected, the evidence to prove them will not be available.
Demanding Governments employ more, better or smarter cops, or do more and better media literacy and education programmes, is all to the good but will not scale. And the authors must have known, or ought to have known, that such things will not happen anyway, not even in the richer global north, never mind elsewhere. Tech has created a problem only tech can solve.
Tech should not try to dodge the bullet with a load of “whataboutery”. Apologies if I am mixing metaphors here, but tech should address the beam in its own eye before telling others about their shortcomings, or using those shortcomings as a pretext for doing nothing or doing less than it could and should. Nobody in tech is really saying
“Yes we could do more to protect children, but we won’t unless the Government does as well.”
Are they?
It’s a Rule of Law question as much as it is anything else
The benefits of better privacy are clear but the price we have to pay collectively for absolute privacy raises wider societal issues.
In the liberal democracies this situation is not sustainable in the long run. We elect Governments to make decisions on how our societies should be run. We hold them accountable through the ballot box and the courts. We cannot and do not delegate such things to private individuals or groups who have their own world view or their own distinct priorities.
A new institution is needed
I am pretty sure 100% of privacy activists would much prefer there was zero csam moving across or being stored on any internet-connected device, network or service. Likewise they would much prefer it if no online service or digital product could be used in a way that put children in any kind of danger. They also know there are things that could be done which would reduce the likelihood of either of those things happening, but they don’t trust the corporations to do them, or Governments to mandate them, in ways that would not open up the possibility of abuse, by which I mean unlawful, arbitrary or unwarranted intrusion which would lead to unconscionable outcomes for the individuals whose privacy rights were violated in that way.
It’s called living in a world of zero trust.
Obviously Governments have far more options available to them to behave badly than corporations do, so I get why people are more wary of Governments than corporations, and to that extent lots of privacy activists find themselves reluctantly driven into the welcoming, if temporary and highly instrumental, arms of Big Tech.
But in a world of zero trust why should we believe anything anyone says? Including anything coming out of Silicon Valley, particularly these days.
A world in which everyone is marking and publishing their own homework is, well, I’m grasping for an appropriate word. I’ll settle for “unacceptable”. True enough, the paper does refer to the need for independent audits and access to data by independent academics, but I’m afraid that’s a nugget which disappears in the avalanche.
It must be possible to construct new institutions in which everyone except those involved with the La La Land of QAnon or similar can repose sufficient trust, sufficient at least to acknowledge nothing crooked is going on with the tools. The only things that are happening are as advertised. Nothing more.
My major concern would be to develop and use tools that address crimes which involve the sexual abuse of children but I readily accept it would be impossible to limit the use of the tools only to that end.
Which is why it is doubly important that the kind of new institution I have in mind is one in which we can all have the highest degree of confidence, and that only serious crimes on a published list are being addressed.
Otherwise what are we saying?
“Yep! Tech has brought society many benefits but it has also enabled a load of what are, for practical purposes new crimes which are being committed on a grand scale, but we draw the line at using tech to prevent them. This is going to be with us in perpetuity. Get digitally literate or the devil will take the hindmost. Thank you and goodnight. That’s my last word on the subject.”
PS
Before you ask, the answer is “no I do not have a fully worked out model of what my ‘new institution’ would look like”.
PPS
The paper does mention this but does not give it the prominence it deserves, perhaps because it does rather weaken their overall argument about the protection afforded by current forms of strong encryption.
If you think strong encryption is protecting the messages you have already sent and will send today or tomorrow, just hope quantum computing doesn’t arrive any time sooner than expected and render every single one of those messages readable by… you name it.
Big Tech, everyone, should talk more about “Harvest Now Decrypt Later”. Governments around the world are talking about it. So are others.