The EU is on the edge of making another child safety blunder

On Friday of this week there will be a meeting, I assume in Brussels, to discuss progress on the draft ePrivacy Regulation currently making its way through the EU’s machinery. If, when you have read this blog, you feel so minded, please contact someone in your Government to let them know what you think they should say at that meeting. Remember what John Stuart Mill (and many others) said (paraphrasing):

For bad stuff to happen, all that is needed is for good people to do nothing

I was contacted by several good people who did not want to do nothing. They told me:

The EU is about to pass a law which will make it illegal for online businesses to deploy or continue deploying PhotoDNA and similar tools to detect child sex abuse material as part of a process which leads to its deletion and reporting to the police.

It is probably best if I do not repeat the precise words I used when I heard this. Suffice to say I expressed profound scepticism.

So far this year in the USA, for example, NCMEC has received over 13 million notifications of csam, and around 99% of them were generated thanks to PhotoDNA. By the end of this year the number is likely to be over 20 million, and these images will have been located and deleted before anyone reported them! That does not necessarily mean they were deleted before anyone other than the originator had actually seen them, but it might.

Who could possibly want to stop something like this from carrying on?

Nevertheless, mindful of my own injunction, I contacted several people and in the end spoke to one of the lawyers who is involved in writing the Regulation.

In essence I asked two questions:

Did the drafters of the Regulation intend to outlaw, reduce or limit the scope for companies to continue their pre-existing practice of deploying PhotoDNA or similar tools which are designed to identify child sex abuse material in the form of videos or stills?

and

Irrespective of the intentions of the drafters, is such an interpretation of the current wording possible and reasonable?

The answer to both questions should have been a simple “no”. It wasn’t.

Proactive scanning for illegal or otherwise prohibited content or behaviour is today a standard feature of a great deal of security-oriented activity. It can be extremely important, for example, in analysing metadata to detect suspicious patterns of behaviour such as grooming.
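To make the mechanics concrete, here is a minimal sketch of the general technique behind tools like PhotoDNA: fingerprint an uploaded image and check it against a list of fingerprints of already-known csam. PhotoDNA itself is proprietary and uses a robust perceptual hash that survives resizing and re-encoding; the sketch below substitutes an ordinary cryptographic hash purely for illustration, so it only matches byte-identical files, and every name in it is hypothetical.

```python
# Minimal, illustrative sketch of hash-list matching. This is NOT
# PhotoDNA's algorithm: real deployments use a perceptual hash that is
# tolerant of resizing and re-encoding, whereas SHA-256 here only
# matches byte-identical files. All names below are hypothetical.

import hashlib

# Hypothetical list of fingerprints of known csam. In practice such a
# list is supplied under strict controls by bodies like NCMEC, not
# assembled by the service itself.
KNOWN_HASHES = {
    "9f2b...",  # placeholder digests, truncated for illustration
    "c41a...",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a fingerprint for an uploaded image (SHA-256 stand-in)."""
    return hashlib.sha256(image_bytes).hexdigest()

def check_upload(image_bytes: bytes) -> bool:
    """True if the upload matches the known-image list.

    A real deployment would then quarantine the file, delete it from
    public view and report it, e.g. to NCMEC's CyberTipline.
    """
    return fingerprint(image_bytes) in KNOWN_HASHES
```

The point the sketch makes is that the matching happens against a pre-compiled list of fingerprints of material already confirmed to be illegal; nothing that fails to match is flagged, and no human at the service needs to look at it.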

For these reasons I thought the current wording was just sloppy, written by someone who had not thought it through. Any confusion surrounding the use of PhotoDNA and similar tools could be rapidly and easily cleared up.

Imagine my surprise, therefore, when yesterday the proposal of the Austrian Presidency dropped into my inbox. It will be discussed this Friday. If you read it you will notice two things: it at least recognizes there is an issue with csam, but it also appears to accept that the Regulation does make the use of PhotoDNA illegal, because it draws attention to the fact that Member States can derogate from that part of the Regulation.

There might be a good many reasons why a national Parliament cannot get a derogation measure through its local machinery within a given time frame, so what is to be gained by allowing a patchwork of laws to emerge around an issue of this kind?

In Article 25(1) of the EU Directive of 2011 (Directive 2011/93/EU) a rule is stated in crisp, clear language. It says every Member State must have machinery to allow people to report csam. No exceptions. No variations. Today they all comply. That is how it should be when dealing with csam.

I guess the larger question is how can it be, yet again, that the EU has bungled and blundered its way into a mess like this? What is missing in its machinery that allows stuff like this to happen? Repeatedly.

The EU has rested on its online child protection laurels for too long. Increasingly the Brussels rhetoric and the reality are diverging, but I will return to this at another time.