The dust has settled on the European elections. The President of the Commission has been appointed: congratulations to Germany’s Ursula von der Leyen. However, we don’t know who the individual Commissioners will be, much less how the portfolios will be separated or combined before being distributed. The process of appointing the Commissioners starts soon, but nobody officially takes up their position until 1st November. Meanwhile, Euro-business is getting going again, with expert working groups busying themselves tying up loose ends from the last Commission or preparing the ground for expected initiatives.
The e-Privacy Regulation swings back into view
A new draft text has been issued for our old friend the “Proposal for a Regulation of the European Parliament and of the Council concerning the respect for private life and the protection of personal data in electronic communications and repealing Directive 2002/58/EC (Regulation on Privacy and Electronic Communications)”.
It’s like the famed curate’s egg. There are good bits and not so good bits.
Page 4, paragraph 6, is a good bit. It says:
“With regard to the child imagery issue, the Presidency has introduced in art. 29(3) a provision [3(b) iv] that allows providers to continue processing electronic communications data for the sole purpose of detecting, deleting and reporting material constituting child pornography (sic), if they started such processing before the entry into force of the ePrivacy Regulation and the technology used fulfils a number of conditions listed in the provision. The end date for this provision is to be discussed…”.
I wish the EU would find a way to drop that horrible phrase – “child pornography”. Nobody who works in this area with any sensitivity or understanding of the issues uses it. We are talking about child sex abuse material (csam).
Thus, going forward, at the point when the Regulation comes into force, companies already engaged in attempting to detect, report and delete csam using hash-based technology, e.g. PhotoDNA, will be allowed to continue doing so. In previous versions a blanket ban was proposed. The latest text therefore represents progress. Bravo.
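For readers who have not met it before, this is essentially all that hash-based detection involves at the point of upload: compare a fingerprint of the incoming image against a list of fingerprints of already-confirmed material. Here is a minimal sketch in Python, under clearly stated assumptions: the function names and the hash list are hypothetical, and an ordinary cryptographic hash stands in for PhotoDNA’s proprietary perceptual hash, which is designed to survive resizing and re-encoding.

```python
import hashlib

# Hypothetical hash list distributed by a hash-database administrator.
# The names and set-up here are illustrative, not a real feed.
KNOWN_CSAM_HASHES: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    """Return a fingerprint of an image.

    Real deployments use perceptual hashes such as PhotoDNA, which match
    visually similar copies of an image; SHA-256 is used here only to
    keep the sketch self-contained and runnable.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_csam(image_bytes: bytes) -> bool:
    """True if the uploaded image matches a known hash, in which case the
    provider would delete the copy and report it to the relevant authority."""
    return fingerprint(image_bytes) in KNOWN_CSAM_HASHES
```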
Unintended and unacceptable consequences
But I doubt I am the only person feeling puzzled.
If relevant companies are not engaged in trying to “detect, delete and report” material constituting csam before the Regulation comes into force, shame on them, but why prevent them from choosing to do so after the Regulation has become operative?
We shouldn’t be talking about this as if we were looking for a compromise to protect existing investments. We should be talking about a matter of principle, and the principle is simple: if technical tools are available which can help a business detect, delete and report csam, then the business should be free to use them. No ifs, no buts. Some might argue they ought to be required to use them.
The proposal as it stands opens up the possibility of unevenness and divergence in respect of children’s safety as between otherwise identical online services, the only difference being when an online business saw the light. What might be the unintended consequences of that? Criminals shifting from old school platforms to new ones?
And they are already contemplating its demise
Elsewhere, Article 29 of the draft Regulation (see page 87) addresses when the Regulation comes into force and when it applies. But it also discusses when this part of it will end. That date is as yet unspecified.
I note, though, that in para 2 on the same page there is a crossed-out provision suggesting that at one point somebody was thinking it might endure for only 24 months. That is alarming.
Moreover, it does not square with Article 28, which speaks of evaluating the measure’s effectiveness on a three-yearly basis. That is a great idea and very welcome but, naturally, one would expect the systems to have been working for at least three years, otherwise a proper evaluation will not be possible, at any rate not within that timeframe.
Effectiveness matters
Obviously it is right that the continuation, amendment or even termination of the new law, maybe any law, should be based on the outcome of an evaluation of the effectiveness of its operation. By the same token, the idea of fixing a forward date for ending it, without knowing how well it has worked, does not add up.
A further amendment is needed
There is more.
Article 29(3)(b)(iv) forbids anyone from storing “….. electronic communications data, except in the cases where material constituting child pornography (sic) has been detected by virtue of a hash.”
I assume this is intended to provide a legal basis for retaining any data necessary to allow the appropriate authorities to investigate a crime and bring a case to court, and for companies to assist in that activity. If so, that is good, but it needs widening.
Technologies are available now which, using Artificial Intelligence, can identify and flag items that are very likely to be csam but are waiting in a queue to be confirmed as csam (or rejected because they are not). Google’s tool is called the “Content Safety API”. Facebook is known to have its own tool for doing something similar, although it is not yet available outside the company.
Whether or not an image is found, reported and confirmed as csam depends on a great many variables. But that is only the first step in the process. After being confirmed as csam, the image has to be hashed and, crucially, entered into a usable database. It is this database of hashes that allows others to deploy it for the purposes of “detecting, deleting and reporting material constituting child pornography (sic)”. Until now, adding to, managing and distributing hashes has been the product of a substantial, collective effort.
But this second stage is also dependent on a number of factors, one of the most important of which is the availability of human resources.
That is because those responsible for administering some of the major hash databases insist that, before the hash of an image can be introduced to the database, the image itself must be seen by three sets of human eyes, i.e. two sets in addition to those provided by the original confirming organization. This is to ensure consistency and quality control.
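To make the “three sets of eyes” rule concrete, here is a rough sketch, again in Python and again with hypothetical names, of how a hash-database administrator might track confirmations before a hash is allowed into the usable database. It models only the rule described above, not any particular organization’s actual workflow.

```python
from dataclasses import dataclass, field

REQUIRED_REVIEWS = 3  # "three sets of eyes", as described above

@dataclass
class QueuedImage:
    """An image flagged (e.g. by an AI classifier) as likely csam,
    awaiting human confirmation before its hash can be used."""
    image_hash: str
    confirmed_by: set[str] = field(default_factory=set)  # reviewing organizations

def record_review(item: QueuedImage, organization: str,
                  usable_hashes: set[str]) -> None:
    """Record one organization's confirmation. Only when three distinct
    organizations have confirmed does the hash enter the usable database."""
    item.confirmed_by.add(organization)
    if len(item.confirmed_by) >= REQUIRED_REVIEWS:
        usable_hashes.add(item.image_hash)
```

The point is simply that the flagged-but-unconfirmed images have to be stored and processed somewhere while they wait for those reviews, and that waiting is where the backlogs described below come from.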
Enormous backlogs
At the moment there are enormous backlogs of images in the queue. For example, the Canadian Project Arachnid has so far captured 10.9 million images that were initially thought to contain csam or other sexualised material harmful to individual children.
Via a collaborative endeavour by a network which includes several hotlines, some of them in the EU, so far only 850,000 images have been through the “three sets of eyes” process. Of these, nearly 300,000 meet the INTERPOL baseline definition of csam and have therefore been transferred into a usable database of hashes. Around 10 million images are still waiting.
Another unintended consequence?
Is it the Commission’s intention to make the operation of such systems illegal? Does the Commission intend to make it unlawful for companies or other organizations to try to detect and store, or otherwise process, images which are very likely eventually to go into a database of csam hashes but which have not, at a given moment, been confirmed as csam? That is how I fear the current proposal will be interpreted.
Surely the Commission does not want to stop organizations looking for material which is likely to be csam and storing it pending an evaluation which would determine whether or not it is, in fact, csam?
Wouldn’t this mean only companies or organizations outside the reach of EU law could do this kind of work? Companies or organizations inside the EU would be made to wait until someone else found qualifying images, converted them to a hash and put them in a database? Another unintended, and in this case also ridiculous, consequence?
We need some new words to clear this up. Urgently.