Apple’s announcement about how it intends to address the problem of child sex abuse material (csam) being distributed using its devices and services shows what can happen when a serious company puts serious resources behind trying to solve a serious problem. It also explains why so many children’s advocates from around the world have been rallying in support of the company. This is a rare thing.
A letter applauding Apple’s stance was coordinated by ECPAT International and, in a matter of days, it attracted almost 100 signatures. Had there been time there would have been many more. But Apple is under attack so we felt we had to move fast.
Stand-off in the Valley
It will not have escaped many people’s notice that Facebook had already declared it intended to do more or less the exact opposite of what Apple says it is now planning. And the only, or at any rate the first, major industry attack dog to criticise Apple’s plans was Will Cathcart of WhatsApp, a Facebook-owned company. The irony of the most privacy-abusing company in the history of the internet criticising the most privacy-respecting one, on the grounds of privacy, sort of takes your breath away. Chutzpah, thy name is Facebook. They should show a little humility.
But there is no threat to anyone’s privacy
Using a series of technical buffers, all that Apple has done is create a means to spot csam. Nothing more. Nothing less. Think about a luggage scanning machine or a sniffer dog at an airport. Nobody’s suitcase or backpack gets opened without probable cause. Is that an abuse of privacy in any meaningful sense? Yet we all tolerate or even welcome it, maybe because we see an immediate benefit to ourselves as soon-to-be passengers?
Here’s another analogy. Imagine you own a pair of glasses which only allow you to see red dots, and that you have neither the means nor the intention of recording anything else.
Billions upon billions of blue, green and yellow dots could pass in front of your eyes but it would be as if they were not there at all. Gone, and not even forgotten, because they were never recorded to begin with. In this case the red dots are images of children being raped. These you check before acting to delete them and reporting the apparent perpetrator.
What is wrong with that exactly? Is this a good thing to do or a bad thing to do? If you believe it is good, do it. If you think it is bad, don’t. There’s no need to go hunting for hypotheticals to provide an alibi for inaction.
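For readers who like to see the shape of the thing, here is a minimal, deliberately simplified sketch of the fingerprint-matching idea behind that analogy. It is not Apple’s implementation (which, as publicly described, uses NeuralHash, a blinded on-device hash database and cryptographic thresholding); the hash function, the threshold value and the known-hash set below are all illustrative stand-ins.

```python
import hashlib
from typing import Iterable, List, Set

# Illustrative threshold: Apple has spoken publicly of needing roughly 30
# matches to accumulate before anything at all is flagged for human review.
MATCH_THRESHOLD = 30


def fingerprint(image_bytes: bytes) -> str:
    """Stand-in fingerprint. Real systems use robust perceptual hashes
    (PhotoDNA, NeuralHash) that survive resizing and re-encoding; plain
    SHA-256 is used here only to keep the sketch self-contained."""
    return hashlib.sha256(image_bytes).hexdigest()


def matches_for_review(images: Iterable[bytes],
                       known_csam_hashes: Set[str]) -> List[str]:
    """Return only the fingerprints that match the known-csam list.

    Everything else -- all the blue, green and yellow dots -- is never
    stored, never inspected and never leaves this function."""
    matched = [fp for fp in (fingerprint(img) for img in images)
               if fp in known_csam_hashes]
    # Below the threshold, nothing is surfaced at all.
    return matched if len(matched) >= MATCH_THRESHOLD else []
```

The point the sketch makes is the same one the glasses make: anything that does not match the known list is never stored, never inspected and never reported.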
So how did we get here?
Historically, the amount of csam being found on the internet and reported was quite small. This left open the possibility that this was because the amounts “out there” were quite small.
Even as the internet started to grow, the numbers remained remarkably low.
In most countries, countable reports of csam are made to hotlines and, while not every country has one, large swathes of the world’s population are covered. Thus, the numbers reported to and by hotlines have been the best indicator we had.
Look at the 2019 report of INHOPE (the global association of hotlines – I am a member of its Advisory Board). This is what it says:
“… with capacity expansion in existing member hotlines … (we have seen) double … the number of CSAM related images and videos processed … from 2017 to 2019.
Statistics from 2019:
- 183,788 reports were processed
- 456,055 images and videos were assessed
- 320,672 illegal images and videos were processed”
To explain these modest but nonetheless welcome numbers we need to look at several factors. Individually and therefore cumulatively these had a decisive influence on producing the figures shown.
- In most jurisdictions it was (and still is) illegal to go proactively looking for csam so, in theory, pretty much every report made to most, if not all, of the hotlines whose efforts are reflected above was the result of someone “stumbling” across csam accidentally.
- Alternatively, perhaps a would-be reporter had been sent some csam on an unsolicited basis. Either way, the individual concerned then had to have or find the time, inclination and determination to discover how to make a report and proceed to do so successfully. It’s not hard to work out the weaknesses and limits of this approach.
- If there was no mandatory reporting obligation on the firm that owned the server or service on which the csam was located, any report an individual might have made perhaps never found its way into the stats.
- Yet without doubt the single biggest factor which explains the small number of reports reaching INHOPE is the lack of proactivity. How do we know this? Because in another part of the same forest some hotlines are actually doing things differently, obtaining results which are orders of magnitude greater. See below. But first….
A children’s hagiography of the internet
A future history of the internet will identify a number of key events and players in the fight against csam.
- Founded in 1996, the IWF was one of the first hotlines in the world and the key mover in founding INHOPE.
- The IWF and BT get top spot on this list of major global actors because, working co-operatively, they devised a proactive method, Cleanfeed, for restricting access to places on the internet known to contain csam. This happened in 2004. It was an enormously important precedent.
- Even if the approach skirted the edges of legality, the authorities made clear they would not authorise a prosecution because they did not see how it could be in the public interest to take to court anyone who was helping keep children safe.
- Cleanfeed probably succeeded because, although emerging as a voluntary measure, it got strong backing from the Government of the day, the police and the Crown Prosecution Service, and it enjoyed all-party support in Parliament.
- The usual suspects were against it. They always are.
- Fairly soon every UK ISP and every mobile phone company signed up and followed BT’s lead, as did providers of Wi-Fi access in public spaces. They all took the IWF list of URLs to be blocked. The practice spread at home and abroad.
- The IWF also deserves special mention because of its muscular determination to make it possible for people in countries around the world to have a way of reporting csam even when their local population is too small to justify a full-blown hotline.
- Next in the saintly series must come Microsoft and Professor Hany Farid for developing PhotoDNA. This happened in 2009. Along the same lines as the IWF list, it allowed for the construction of a database, a list, in this case containing unique digital fingerprints of known csam. PhotoDNA made it easier and quicker to find, delete and report offending items. The numbers began to climb steeply.
- The usual suspects were against it. They always are.
- The Canadian Centre for Child Protection then changed everything. First, they had the ingenious idea of linking the PhotoDNA database with a web crawler (a simplified sketch of the crawler-plus-fingerprint idea appears after this list). In 2017 they set Project Arachnid loose and in six weeks identified 5.1 million web pages containing 40,000 unique csam images. This was miles away from anything that had ever been seen in the public domain. We all began to get a better idea of just how big the csam problem really was. The sense of urgency went up a notch.
- The usual suspects were against it. They always are.
- Next the Canadians did something else. In 2018, again for the first time ever, they made it their business to find, speak to, support, organize and help project the voice of victims of online csam. With the Phoenix 11 we heard from (now) adult women who had been sexually abused as children. They all knew pictures or videos of their abuse were still circulating on the internet.
- Nobody could be left in any doubt about the on-going harm being done to these brave women by the knowledge that pictures of their most awful humiliation and pain remained accessible in cyberspace.
- Equally nobody could doubt the authenticity of the victims’ anger at parts of Big Tech for failing to act in ways which had now been proven to be highly effective.
- The way the victims saw it, their rights to privacy and human dignity were reduced to nought in the face of opposition from people who thought so many other things counted for more than they did.
- Last but by no means least, comes the US-based National Center for Missing and Exploited Children (NCMEC). Its hotline was founded in 1998 and has long been the benign and much respected 800lb gorilla in the global fight to combat csam. While to outsiders NCMEC’s freedom of action has often seemed to have been limited or circumscribed by bizarre and complex US Federal laws, they have stuck to their last and worked directly with companies and law enforcement to show what can be achieved. At scale.
- It is through NCMEC’s work with willing businesses that last year around 70 million individual items of child sex abuse material were reported to them in over 21.7 million individual reports. And, by the way, I’m writing this blog only days after reading of a case in the USA where one person was found in possession of over 8.5 million images (he got 27 years’ jail time).
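As promised above, here is a rough sketch of the crawler-plus-fingerprint pattern that Project Arachnid combined so effectively. It is not the Canadian Centre’s actual code: the URL source, the hash function and the known-hash list are illustrative stand-ins, and a real deployment would use a robust perceptual hash such as PhotoDNA rather than an exact hash.

```python
import hashlib
import urllib.request
from typing import Iterable, List, Set


def fingerprint(image_bytes: bytes) -> str:
    """Stand-in for a PhotoDNA-style robust hash of an image.
    SHA-256 is used here only to keep the sketch self-contained."""
    return hashlib.sha256(image_bytes).hexdigest()


def scan_image_urls(image_urls: Iterable[str],
                    known_csam_hashes: Set[str]) -> List[str]:
    """Crawl a list of image URLs and return those whose fingerprints
    match known csam, queued for human assessment and a takedown notice.
    Non-matching images are simply discarded."""
    flagged = []
    for url in image_urls:
        with urllib.request.urlopen(url) as response:
            image_bytes = response.read()
        if fingerprint(image_bytes) in known_csam_hashes:
            flagged.append(url)
    return flagged
```

Trivial as the sketch is, it captures why the numbers jumped: a crawler never gets tired, never has to “stumble” across anything and never has to work out how to file a report.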
With apologies to John Travolta and Olivia Newton-John, “Scale is the word”.
Analogue thinking does not work in a digital world
Apple understands that scale is everything when dealing with csam.
What the usual suspects’ objections to Apple’s policy announcement come down to is this: Apple has devised a method which could be abused by being deployed for purposes other than the detection of csam. This is so transparently without intellectual coherence it is barely worth taking seriously. Is this the new standard?
“Say ‘no’ to anything that might be used in a bad way?” Isn’t it a bit late for that?
Is the whole of global Tech on hold in terms of protecting children until Kim Jong-un subscribes to the New York Times and promises not to be naughty any more?
Come in, “permissionless innovation”, your time is up
I am reminded of the mediaeval Catholic Church’s repeated attempts to stamp out avenues of scientific and philosophical enquiry because, human frailty being what it was, they suspected it would lead to no good. Look how that worked out.
Rather than banning research and development because it could be abused, we need trusted governance. We need transparency systems linked to sanctions which ensure companies do not allow the dollar signs to dazzle or drug them.
If a dictator asks you to do something you don’t agree with, walk away. But, please, don’t ask me not to invent good stuff that can protect children against the possibility that you might succumb to the pull of the filthy lucre at some indeterminate point in the future.
If I were a dissident in North Korea or several other places, I think pretty much the last thing I would do is go on any bit of the internet to further my plans to overthrow the regime. Samizdat is on its way back. Tech is bringing us full circle.
The Venerable Alex Stamos speaks
Having earlier urged Facebook to show some humility when addressing Apple on the matter of privacy, I hesitate to question the judgement of Alex Stamos. He spoke about Apple’s apparent failure to build a “real trust and safety function”, citing the absence of a “mechanism to report spam, death threats, hate speech, NCII or any other kinds of abuse”. Hmmm.
I am sure these are all fine things for an online business to have, and I would be delighted if Apple introduced them, but they are not a substitute for automated systems which can work at scale to protect children.
As NCMEC makes clear in the report cited earlier, of the 21.7 million reports it received, 21.4 million (99%) came from companies and, of these, it is understood the vast majority were generated by automated systems. Untouched by human hands. Unseen by human eyes (for which we should all be thankful).
In the case of NCMEC itself, only 1% of all the reports it received came from members of the public via a “mechanism to report spam, death threats, hate speech, NCII or any other kinds of abuse”.
You cannot build a digital universe then propose analogue solutions to the problems it creates or brings in its wake.