Nobody had spotted it. The European Commission openly acknowledged that an error had been made. Left uncorrected, it would bring to an end measures which have been protecting children since 2009.
On 10th September the Commission published a proposal. It describes the problem and, pending the development of a permanent or longer-term solution, proposes that the status quo be preserved at least until 2025. Phew! That’ll do. Disaster averted.
What was the mistake?
If the mistake is not rectified, when the European Electronic Communications Code comes into effect on 20th December this year it will become illegal for a range of online businesses operating within the EU to continue or begin using automated proactive tools to detect child grooming behaviour, to use PhotoDNA or similar technologies to match the hashes of known child sex abuse images or videos, or to use classifiers to identify images likely to contain child sex abuse material so they can be sent for human review.
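The hash-matching part of these tools is conceptually simple: an uploaded file is hashed and the hash is checked against a list of hashes of known abuse images. The sketch below is purely illustrative, not how PhotoDNA works: PhotoDNA uses a proprietary perceptual hash that also matches re-encoded or slightly altered copies, whereas this sketch substitutes an ordinary SHA-256 just to show the match-against-a-known-list step. The hash list and function names here are hypothetical.

```python
import hashlib

# Hypothetical list of hashes of known illegal images, of the kind
# maintained with hotlines such as NCMEC. (The entry below is simply
# the SHA-256 of the bytes b"test", used for demonstration only.)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_known_image(data: bytes) -> bool:
    """Return True if the file's hash appears on the known-image list."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

# A matching upload would be flagged for action and reporting;
# anything else passes through untouched.
print(matches_known_image(b"test"))           # True
print(matches_known_image(b"holiday photo"))  # False
```

A real deployment differs in the hashing scheme, not the principle: the service never needs to "look at" lawful content, it only checks whether a fingerprint appears on a curated list.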
How well have these sorts of tools been working up to now? Absolute proof is impossible, but to get some insight just look at the last annual report of the USA’s hotline, NCMEC (the National Center for Missing & Exploited Children).
According to NCMEC, in 2019 16.9 million child sex abuse images were reported to them and deleted. 99% were discovered through the kind of tools that are now under threat.
At the LIBE Committee earlier this week the Commission’s remedial proposal ran into some heavy weather (the relevant part of the video starts at 10.36).
Yet several of the points MEPs made at the Committee meeting were perfectly reasonable. It is earnestly to be hoped that Commission officials and MEPs can work something out which will also meet with the approval of the Council of Ministers.
A failure to put this right, now we can see it full on, would not be an oversight or a second mistake. It would be something far, far worse.
A ban on innovation to protect children?
If I have one criticism of the Commission’s proposal, it is that it appears to cover, and therefore allow, only child protection technologies that are already well established and in use. Presumably somebody in Brussels is drawing up a list?
This looks perilously close to saying that innovating to protect children is being made illegal. One imagines updates and fixes would still be allowed, so this could easily get extremely messy.
Would it not be better and simpler to set out technology-neutral general principles governing the use of proactive, online child protection tools? Provided any new tools that come along conform to those principles, they would be in the clear.
Lack of trust and transparency
Obviously there is not a single MEP who wants to help sexual predators to groom children. Neither is any MEP unconcerned about the circulation of child sex abuse images.
Thus the discontent expressed at the LIBE Committee meeting was principally an echo of a wider lack of trust in tech companies. This is something the European institutions could and should have addressed before now, but the fact that they have not done so should not mean children have to pay the price.
One MEP mentioned the possibility that companies currently proactively looking for illegal images or grooming behaviour might be deliberately acquiring data to use for commercial purposes.
The fact is the Commission’s proposal expressly states such behaviour would be illegal, as it already is under the GDPR, so once again we are back to a lack of trust which is, in turn, rooted in zero transparency.
This is one of the key aspects of internet regulation to be addressed in the planned Digital Services Act, which is why the Commission describes the 10th September proposal as an interim measure only.