Governments and legislators stood by and watched for years while the internet exploded, bringing in its wake huge benefits but also several downsides, particularly for children.
“Permissionless Innovation” was the watchword. We even created special legal immunities to help things along, the idea being that new stuff would be tried out around the edge of the network without anybody having to sign a form in triplicate, get a green light from “higher up” or worry about a writ or subpoena. This created a reckless culture which is only now beginning to be addressed in every major democracy. In the case of the EU this will be through the Digital Services Act.
Innovation under attack
Against this historical background, pardon me if a wry smile passes my lips when I hear the anti-grooming programmes, classifiers and hash databases being attacked. These are examples of innovation. These are examples of techies trying to find better ways of doing things, in this case keeping children safe. The very opposite of reckless. As Microsoft’s Affidavit attests, these tools are not supersmart tricks designed to make more money for whoever deploys them, although, given the history of Big Tech, the suspicion that they might be is completely understandable.
And who is attacking the innovative child protection tools just mentioned? Not people who are habitués of platforms where children’s rights and safety are discussed. Most of the attackers are substantially identified with completely different agendas, principally the privacy agenda.
Of course everybody is entitled to an opinion. But if some of us who regularly plough the furrow of children’s rights and safety seem confused as to precisely why these privacy warriors are suddenly taking a deep interest in children, I hope they will not take it personally and will understand why.
Is this what the drafters of the GDPR intended?
When passing the GDPR did the European institutions expressly intend to make it difficult to detect and delete images of children being raped? Did they knowingly plan to make it easier for a paedophile to contact a child?
No. The very idea is absurd.
So if there is any legal basis at all for the critics’ arguments about proactive child protection tools, and I do not believe there is, it arises solely as an unanticipated, unintended consequence of a set of rules drafted principally for other purposes.
We need politicians to fix that problem, not manipulate or take advantage of it.
A collective mea culpa
If we had already constructed a transparency and accountability regime in which we all had confidence, I doubt these issues would even be under discussion. But we haven’t. For this we are all to blame, in varying degrees. The answer is to get on with building that regime, not to risk putting children in harm’s way.
I am certain much common ground could be found if we were not immersed in the unwanted, pressured environment created by the current, highly unusual circumstances.
We shouldn’t confuse jurisprudence with politics
As in all things there will be issues of balance and proportionality but in Europe aren’t these, essentially, jurisprudential questions to be determined in accordance with, for example, the European Convention on Human Rights, the EU’s Charter of Fundamental Rights and case law? Should I add the UN Convention on the Rights of the Child and the Lanzarote Convention, to which every EU Member State has signed up? You decide.
Politicians should not take it upon themselves to say “we cannot do this or that because it is illegal or we must do the other because the law requires it” if all that amounts to is using the law as a cover for politics, or as a way of dodging responsibility for something you know could otherwise be unpopular.
The institutions will not allow laws to pass which ex facie are illegal. And if they do, neutral judges will resolve things.
Zero evidence of harm. Tons of evidence of good
Where is the evidence the use of anti-grooming tools, classifiers or hash databases has harmed anyone? There isn’t any.
But we have lots of evidence of the good the tools are doing.
Look at the number of CSAM reports being processed by NCMEC and how many of these resolve to offenders in EU Member States: 3 million in 2019, and 2.3 million up to 1st October 2020. 95% of these were derived from messaging, chat and email services. 200 children in Germany were identified. 70 children in The Netherlands. And there is more of this kind of information available country by country.
Look at the concrete evidence showing how anti-grooming tools are protecting children in Europe. And the classifiers work in a similar way.
Between 1st January and 30th September 2020, NCMEC received 1,020 reports relating to the grooming and online enticement of children for sexual acts where these reports resolved to EU Member States.
905 were the result of reports made by the companies themselves, generated by their own use of tools. Only 105 were the result of manual reports by the public. 361 reports came from chat or messaging apps. 376 came from social media. These led to action to save one or more children in Belgium, France, Germany, Hungary, The Netherlands and Poland. Tell me again why we should junk the tools?
Human review is an integral part of all the processes
There is always human review before any action is taken on something that is flagged by a classifier or an anti-grooming tool. Relying only on keywords is absolutely not what is happening. Context can be vital. But the tools do not comprehend, analyse, record or keep conversations or messages. They pick up on signs which are known to point to perils for kids. No signs. No action. Nothing happens. Just like sniffer dogs at airports.
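The flow described above can be sketched in a few lines. This is a minimal illustration with hypothetical names and a made-up keyword stand-in for the classifier (real anti-grooming tools are trained models, not keyword lists, as the paragraph above notes): a signal either triggers human review or nothing happens, and the message itself is never retained.

```python
# Sketch (all names and thresholds hypothetical) of the "flag, then
# human review" flow: an automated tool emits a risk signal, and only
# flagged items reach a human reviewer. No automatic action is taken,
# and message content is not stored by the tool.

RISK_THRESHOLD = 0.9  # assumed operating point; real systems tune this


def risk_score(message: str) -> float:
    """Stand-in for an anti-grooming classifier, returning a score in
    [0, 1]. A real classifier would be a trained model; this keyword
    check is purely illustrative."""
    known_signals = {"meet alone", "keep this secret", "send a photo"}
    hits = sum(1 for s in known_signals if s in message.lower())
    return min(1.0, hits / 2)


def triage(message: str) -> str:
    """Route a message: no signal means nothing happens at all; a
    strong signal means a human reviews it before any action."""
    if risk_score(message) >= RISK_THRESHOLD:
        return "queue_for_human_review"  # never automatic action
    return "no_action"                   # message is not kept


print(triage("see you at practice tomorrow"))       # no_action
print(triage("keep this secret and send a photo"))  # queue_for_human_review
```

The point of the structure, like the sniffer dog, is that the default path is "no signs, no action, nothing happens": only the flagged minority is ever seen by a person.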
And by the way, no image goes into a hash database of CSAM without first having been reviewed, normally by at least three sets of human eyes. It does not need to be looked at again after that before it goes to law enforcement or before the image is taken down. To look at it again would defeat the whole point of automating this part of the process. Among other things, don’t we want to minimise the number of times individuals look at things like that? Yes we do.
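The hash-database workflow just described can be sketched as follows. This is an assumption-laden illustration, not any vendor's actual system: it uses plain SHA-256 for simplicity, whereas real deployments use perceptual hashes (such as PhotoDNA) so that near-duplicates also match. The key property shown is the one in the paragraph above: a hash enters the database only after human review, and later matches are made by comparing hashes, so nobody has to view the image again.

```python
import hashlib

# Illustrative sketch of hash-database matching. Real systems use
# perceptual hashing, not SHA-256; this only demonstrates the workflow:
# human review happens once, at ingestion, and never again at match time.

# Hashes enter this set only after human reviewers confirm the image.
reviewed_known_hashes: set[str] = set()


def add_after_review(image_bytes: bytes) -> None:
    """Called only once reviewers (normally several) have confirmed
    the image. Only the hash is stored, not the image."""
    reviewed_known_hashes.add(hashlib.sha256(image_bytes).hexdigest())


def is_known(image_bytes: bytes) -> bool:
    """Later matching is automatic: the candidate's hash is compared
    against the database, so the image need not be viewed again."""
    return hashlib.sha256(image_bytes).hexdigest() in reviewed_known_hashes


add_after_review(b"example-reviewed-image")
print(is_known(b"example-reviewed-image"))  # True: matched by hash alone
print(is_known(b"some-other-image"))        # False: no match, no action
```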