In 2021 NCMEC received 29.3 million reports of child sexual exploitation, typically containing a still image or video of a child being sexually abused. This was an increase of 35% on the year before. Less than 1% of the 29.3 million came from individual end users. The great bulk were made by electronic service providers proactively using automated tools linked to a PhotoDNA database, or something similar. This approach is expressly allowed under EU, US Federal and UK law, as well as the laws of a large number of other jurisdictions. Businesses can choose to do it on a voluntary basis and many have.
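For anyone unfamiliar with what "proactively using automated tools linked to a PhotoDNA database" means in practice, here is a minimal sketch. PhotoDNA itself is proprietary and far more robust; the simple "difference hash" below, and the function names, are stand-ins of my own, intended only to show the shape of the idea: fingerprint each uploaded image and compare the fingerprint against a database of fingerprints of previously confirmed material.

```python
# Illustrative stand-in for hash-based detection. PhotoDNA is proprietary and
# far more resilient to cropping, re-encoding and so on; this simple
# "difference hash" only demonstrates the principle of matching uploads
# against fingerprints of previously confirmed material.

from PIL import Image  # pip install pillow


def dhash(path: str, size: int = 8) -> int:
    """Tiny perceptual hash: record whether each pixel is brighter than its right-hand neighbour."""
    img = Image.open(path).convert("L").resize((size + 1, size), Image.LANCZOS)
    pixels = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of bits on which two hashes differ."""
    return bin(a ^ b).count("1")


def matches_known(path: str, known_hashes: list[int], max_distance: int = 5) -> bool:
    """True if an uploaded image is close enough to any previously confirmed hash."""
    h = dhash(path)
    return any(hamming(h, k) <= max_distance for k in known_hashes)
```

A provider can only run this kind of check, of course, on content it can actually see. Keep that in mind.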
The reports contained 39.9 million still images and 44.8 million videos, a combined total of 84.7 million items. These are significant numbers.
A huge proportion of the reports came from Meta, derived from Instagram or Facebook Messenger, but there were also substantial numbers from hundreds of other companies. A full list of these companies is provided by NCMEC. How many reports came from Apple? 160. I have not missed off any zeros or commas by mistake. 160 is emphatically not a significant number. And it was down on the previous year (265). Ditto.
Meanwhile in Australia
Australia's eSafety Commissioner, Julie Inman Grant, published her first transparency report on Monday of this week. In it she notes that Apple does not
attempt to proactively detect previously confirmed child abuse material
Commissioner Inman Grant added
some of the biggest and richest technology companies …. are turning a blind eye, failing to take appropriate steps to protect the most vulnerable from the most predatory
Quite.
Going back to NCMEC’s numbers, even if you took Meta out of the frame altogether it is still difficult to reconcile the microscopic volume of reports made by the Cupertino Colossus with those being made by almost everybody else. How do we explain this loud silence on the part of Apple? Simple. Encryption. Apple cannot see what is being exchanged or stored using its systems. The others can.
Is there anything unique or distinctly different about Apple's user base? That seems highly unlikely. Rationally, you would expect Apple's systems to be used by child abusers to a greater extent than those of other providers, precisely because abusers are aware of the way encryption works within the existing Apple ecosystem.
Eric Friedman speaks
On 20th August 2021 the following statement appeared in public, linked to the name of Eric Friedman, Apple's anti-fraud chief. It had been extracted from court documents filed the year before, in 2020.
we are the greatest platform for distributing child porn
Eric Friedman knew the scale of the problem on Meta and yet he said that!
The document was marked
highly confidential – attorneys’ eyes only
Hey ho.
In the same court document we see internal emails in which, when challenged about his assertion by another Apple executive, Friedman does not retract, qualify or modify his earlier statement. Quite how Friedman could know anything about how much CSAM is being distributed over Apple's messaging systems or stored in iCloud is another matter, but he is still employed by the company and one doesn't imagine he spoke in complete ignorance of what was going on. There has been no disavowal of any kind.
Apple decides to act
Apple made an announcement which, at least to some degree, seemed to acknowledge the substance of Friedman's 2020 claim and to accept that the company had an obligation, a responsibility, to act directly on it. I put the word "directly" in bold and italics for a reason. Apple was not proposing to engage only in educational and empowerment initiatives. No. It would do that, but it was also going to go at the problem full frontal, not shying away from identifying and deleting criminal child sexual abuse material. Bravo!
In that specific regard the centrepiece of the August 2021 announcement was a proposal to deploy a proactive, PhotoDNA-style hash-matching solution, thereby bringing Apple in line with many of its competitors (see the NCMEC list). Apple proposed to deliver on this promise using client-side scanning, i.e. on the device. However, if you click on that link I just gave you, you will find no reference to this idea. It has been edited out. Why? Because Apple has changed its mind and, inter alia, made a crude attempt at air-brushing history. What does that remind me of? Never mind. Such juvenile tomfoolery is comparatively trivial when set against the larger crime of reversing the original decision.
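For the record, here is what the client-side design amounted to in outline: do the matching on the device, and surface nothing for human review until an account crosses a threshold of matches (Apple's published material spoke of roughly thirty). The sketch below is my own greatly simplified illustration of that gating idea, reusing the matches_known() helper from the earlier sketch; Apple's actual design used its own NeuralHash, encrypted "safety vouchers" and threshold secret sharing, none of which is reproduced here.

```python
# Greatly simplified illustration of on-device, threshold-gated matching.
# It reuses matches_known() from the earlier sketch. Apple's real 2021 design
# relied on NeuralHash, encrypted "safety vouchers" and threshold secret
# sharing, none of which appears here.

REPORT_THRESHOLD = 30  # Apple's published material spoke of roughly 30 matches


class OnDeviceScanner:
    def __init__(self, known_hashes: list[int]):
        self.known_hashes = known_hashes
        self.match_count = 0

    def before_upload(self, path: str) -> None:
        """Check a photo on the device before it is sent to cloud storage."""
        if matches_known(path, self.known_hashes):
            self.match_count += 1
        if self.match_count >= REPORT_THRESHOLD:
            self.flag_for_human_review()

    def flag_for_human_review(self) -> None:
        # Only at this point, in the proposal, would anything become visible
        # to human reviewers ahead of any report to NCMEC.
        print("Match threshold crossed: escalate account for review")
```

The point of the threshold was that isolated false matches would never become visible to anyone.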
But first. What cannot be erased from the public record is Apple’s own stout defence of their original decision. This was mounted in the immediate aftermath of a highly choreographed, plainly premeditated and prepared onslaught against it.
Craig Federighi speaks
When the attack came Apple appeared determined to stick to their guns. To stand up for what they said they believed. In a Wall Street Journal article Craig Federighi, Apple's senior vice president of software engineering, tells everybody their original solution contains
multiple levels of auditability
going on to say Apple considers itself to be
absolutely leading on privacy
adding he saw
(this new child protection measure as) an advancement of the state of the art in privacy….enabling a more private world
George Orwell would be proud (not)
In a classic example of Newspeak we are now told Apple has decided not to
move forward with our previously proposed….detection tool….
Instead the company is going to
deepen its investment
in child safety. Why? Because according to Apple
Children can be protected without companies combing through personal data
In other words, Apple has dropped a tool of proven effectiveness for reasons which are diametrically opposed to what they said twelve months ago. And they call this
deepening its investment
Peace is war. War is peace.
Apple has not said their original solution was flawed or faulty in any way
It has simply given in to bullying, although others more cynical than I have whispered that there is likely also a commercial aspect. There always is with Apple. I would like Apple to name the experts they say they consulted before deciding not to prioritise children's safety in the way they previously intended. Everyone I know who has experience and expertise in online child protection applauded and lauded Apple's original decision.
Laughably, Apple employees are laying the blame for the reversal at the door of the backlash that followed the company's announcement, a backlash they accept they had no small part in creating because of the way they bungled the communications. But even accepting that as a possible explanation, it is barely credible coming from a company of Apple's size and stature.
Retaliation of a strange kind
I therefore mention, only to dismiss, Machiavellian suggestions that elements within the company who opposed client-side scanning from the start deliberately engineered the communications foul-up in order to provoke the backlash that followed.
A forewarned, well-prepared privacy lobby went into action almost before the ink was dry. The poor old, under-funded, under-prepared, unforewarned children's lobby had to rush around to get its retaliation in. Retaliation of a strange kind, I might say. It was retaliation in support of Big Tech. In support of Apple. We have been rewarded with a kick in the teeth. That'll learn us. Trust? Hard to earn. Easy to lose.
People like Tim Cook should have the guts to stand up and defend something which was not only technically sound but so very obviously the right thing to do.
By all means, invest away in a wide range of child and parental empowerment tools and other safety initiatives. We can probably never get enough of them. But that should not come at the expense of doing something concrete and immediate which has repeatedly been shown to work in a most critical area.