The wisdom of Max Schrems

I met Max Schrems at a seminar at a law school in the USA last year. He opened his remarks by saying that, in preparing his comments for the seminar, he had tried to talk to lawyers in the privacy community who specialised in, or knew about, children’s rights in the context of privacy law. As he put it: “I couldn’t find anyone”, or “there weren’t that many”.

In part, what we are seeing in the current debacle in Brussels is a product of that. The privacy community is largely a stranger to the world of online child protection. That must change, and soon.

Here is my brief summary of yesterday’s meeting of LIBE followed by a few observations.


There is a lot of support for the temporary derogation but, as things stand, it may not be enough to get us over a satisfactory line. We need to keep lobbying.

There are still some worrying misconceptions and misunderstandings kicking around. Unless they are addressed they could sink the tools by making them useless.

Very restrictive

The lead Rapporteur, Birgit Sippel, seems happy to allow tools to continue to be deployed for up to two years, provided they only identify material classed as “child pornography” within the meaning of Article 2 of the 2011 Directive.

I believe that would kill off classifiers and the anti-grooming tools. This must be resisted but I think, in part, some people’s doubts are based on a fundamental misconception in relation to how the technologies work (see below).

More problematic is Ms Sippel’s suggestion that nothing should be reported to the police unless there has been prior human review. That defeats the whole point of automated proactive systems. The numbers are just too big. That’s precisely why these tools were developed.

What is essential is that there is an exceptionally low error rate. Professor Hany Farid says PhotoDNA works with an error rate of around one in a billion, or less.
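To see what an error rate of that order means at scale, a back-of-envelope calculation helps. The sketch below is illustrative arithmetic only: the one-in-a-billion figure is the rate cited above, while the volume of ten billion items is a hypothetical round number, not a statistic from any company.

```python
# Back-of-envelope: what a one-in-a-billion false-positive rate means at scale.
# The 1e-9 rate is the figure cited for PhotoDNA; the scan volume is hypothetical.

def expected_false_positives(items_scanned: int, error_rate: float = 1e-9) -> float:
    """Expected number of false matches when scanning `items_scanned` items."""
    return items_scanned * error_rate

# Scanning ten billion images at a one-in-a-billion error rate yields,
# on average, only around ten false matches in total.
print(expected_false_positives(10_000_000_000))
```

In other words, even at volumes far beyond what any human review team could handle, an error rate of that order produces a mere handful of mistaken matches.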

I don’t have a problem with Ms Sippel’s ideas around digital impact assessments, consultations or evaluations of the software; on the contrary, they sound great. But they cannot be made conditions precedent, because that, in effect, means halting everything until goodness knows when.

And the issue of data transfers to the USA could be another serious obstacle.

Privacy as a barrier to child protection? No.

We want privacy to protect our health and medical records; to stop companies sneakily snooping on us so they can sell us more stuff; to protect our banking transactions and our national infrastructure; to force companies to take stronger measures to prevent hackers getting our personal data; and, yes, to stop unwarranted invasions of our private lives and communications by the state and other actors, bad or otherwise.

But look at Facebook’s announcement last week. Children in all parts of the world were benefitting from protections Facebook had implemented to detect threatened suicides and self-harm. Everywhere in the world except the EU. Done in the name of privacy.

Now, it seems, also in the name of privacy, tools could be banned which help keep paedophiles away from our children, or which help the victims of child rape regain their human dignity by asserting their own right to privacy.

Not understood the technology

At LIBE there were several references to “scanning everybody’s messages”. That is not what is happening with any of the tools we are trying to preserve.

When we used to go to airports, dogs would walk around sniffing lots of people’s luggage, searching for drugs and other contraband. The machines airport staff put our luggage through do something similar with x-rays. When we post letters or parcels, the Post Office or the carrier employs a range of devices to detect illegal items that might be in any of the envelopes or packages they are planning to deliver for us or to us.

Are the airport authorities or the postal services “scanning” everybody’s mail or luggage? No. At least not in any meaningful sense.

The child protection tools we are discussing are like the dogs at the airport, the luggage X-ray machines, or the devices in the Post Office sorting room.

They are looking for solid signs of illegal content or behaviours which threaten children. No sign. No action.
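That “no sign, no action” logic can be sketched in a few lines of code. This is a deliberately simplified stand-in: real systems such as PhotoDNA use proprietary, robust perceptual hashes, whereas this sketch uses an ordinary cryptographic hash, and the function names and sample data are hypothetical. The point it illustrates is that content is reduced to a fingerprint and compared against a list of known illegal material; anything that does not match is never flagged at all.

```python
import hashlib

# Simplified illustration of hash-list matching. Real tools such as PhotoDNA
# use robust perceptual hashes, not SHA-256; names and data here are hypothetical.

KNOWN_ILLEGAL_HASHES = {
    # In practice this list is compiled from material already confirmed as illegal.
    hashlib.sha256(b"known-illegal-sample").hexdigest(),
}

def should_flag(content: bytes) -> bool:
    """Flag only on a match against the known-hash list: no sign, no action."""
    return hashlib.sha256(content).hexdigest() in KNOWN_ILLEGAL_HASHES

print(should_flag(b"holiday photo"))         # an unmatched item is never flagged
print(should_flag(b"known-illegal-sample"))  # only a listed match triggers action
```

Nothing about an ordinary photo or message is retained or reported; the system only reacts when a fingerprint matches the list, much as the sniffer dog only reacts to the scent it was trained on.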

Could the tools be misused?

Could scanning tools be misused for other purposes? Yes, they could. How we address that, and reassure ourselves it is not happening, is important. But the tools we have been discussing have, in some cases, been in use for over ten years, and we have ample evidence they are doing a good job. We have zero evidence they are doing a bad job.

Who would want to stop them doing that good job just because a variety of bureaucrats didn’t do theirs when they should? That is what this boils down to.

We have to find a way to allow the tools to carry on while we construct a durable, long-term legal basis and oversight and transparency regime.

Those who claim that protecting children in the way these tools do is “disproportionate” should recall that proportionality, like beauty, is in the eye of the beholder. And every legal instrument I know tells us children require special care and attention because they are children.


About John Carr

John Carr is one of the world's leading authorities on children's and young people's use of digital technologies. He is Senior Technical Adviser to Bangkok-based global NGO ECPAT International and is Secretary of the UK's Children's Charities' Coalition on Internet Safety. John is now or has formerly been an Adviser to the Council of Europe, the UN (ITU), the EU and UNICEF. John has advised many of the world's largest technology companies on online child safety. John's skill as a writer has also been widely recognised.