A very bad day for children in Europe

If you live in an EU Member State and you have used Facebook Messenger or Instagram Direct today you probably saw this message. “Some features are not available. This is to respect new rules for messaging services in Europe. We’re working to bring them back.”

This cryptic statement refers, among other things, to the fact that Facebook has turned off its proactive child protection tools in every EU Member State. This is because today the provisions of the European Electronic Communications Code (EECC) take effect.

But Microsoft, Google, LinkedIn, Roblox and Yubo somehow managed to find a way to carry on using the tools. Well done them.

Given that Facebook is the world’s largest source of csam and other child sexual exploitation materials reported to NCMEC and law enforcement, this is unbelievably disappointing.

This should never have happened in the first place BUT

OK, we should never have got into this position, but where there is a will there is a way. Obviously with the five companies I just named there was a will to carry on protecting children by continuing to use the tools. They found sufficient room for doubt to justify a continuation. Facebook didn’t.

Facebook is usually not slow to act when an important commercial interest is threatened. Not here. Facebook rolled over.

Facebook is trying to reshape its image

Facebook is determined to appease and reach out to the privacy lobby. That is plainly an overriding corporate objective that trumps all others. Given the company’s previous lack of care and respect for their users’ privacy it is not hard to work out why they want to reposition themselves  in this way.

But children are paying the price for their inglorious corporate history.

Until this is put right – as it surely will be – how many pictures of children being sexually abused will continue to circulate on the internet? How many paedophiles will manage to connect with children? How many existing victims will be harmed further, and how many new victims will there be? We will never know, but it is unlikely to be zero.

Does Facebook really still have a Safety Advisory Board? Were they consulted about this? If so, when, and what did they say?

The anti-suicide and self-harm tools?

What about the tools which try to detect a child contemplating suicide or self-harm? Have they also been suspended? Maybe they haven’t but essentially they work in the same way as the anti-grooming tools and the classifiers used to detect possible csam. Facebook should put out a statement specifically commenting on that point.

Concrete results

Last month NCMEC published a letter  to MEPs in which they gave some hard numbers.

In 2019 NCMEC received 16.9 million reports referencing 69 million items of csam or child sexual exploitation. Of these, “over 3 million” originated in the EU. That is 4% of the total, or about 250,000 per month. 95% of these reports came from services affected by the EECC. From these reports 200 children living in Germany were identified, as were 70 children living in the Netherlands. In the same letter we see the 2020 numbers are going to be higher.

Knowing that Facebook accounts for the great majority of the reports to which the NCMEC letter refers, we can see the likely dimensions of what Facebook has done.

Shame on Facebook. Let’s hope they succeed in “bringing them back” as soon as possible. Then they can announce they are dropping or modifying their plans to encrypt the very same services.

UK exempt?

Why do the tools continue in use in the UK? It seems to be because the UK adopted laws at national level which provide a good enough legal basis. Can it really be the case that no other Member State did the same? And if one or more did, how can Facebook justify cutting them off?

This has been a bad day for children in Europe.

We are heading for a strange world

Privacy laws were never intended to make it easier for paedophiles to connect with children. They were never intended to make it easier for pictures of children being raped to be stored or circulated online. And it would be a strange world indeed if that is where we are heading.

If there truly is a legal problem here it cannot be one of substance. It can only have arisen because various bureaucrats and lawyers did not get all their ducks in a row and take all the right steps at the right time.

Instead of taking a brave stance in defence of children, Facebook has buckled in the face of the remediable incompetence of others.


A new industry award

On this crucial day for children, as the EU’s “trilogue” meets to decide the fate of proactive child protection tools within the 27 Member States, I have decided to inaugurate a new annual award.

I think I will call it “The Techno Chutzpah Oscar” but if anyone can come up with a better name please let me know.

The Oscar will go to the company that most transparently and egregiously behaves or speaks hypocritically in the context of online child protection. And I have no hesitation in naming Facebook the inaugural winner.

Here is an extract from the New York Times of 4th December 2020

“Facebook, the most prolific reporter of child sexual abuse imagery worldwide, said it would stop proactive scanning entirely in the E.U. if the regulation took effect. In an email, Antigone Davis, Facebook’s global head of safety, said the company was “concerned that the new rules as written today would limit our ability to prevent, detect and respond to harm,” but said it was “committed to complying with the updated privacy laws.” (emphasis added)

This statement came from the company that normally goes straight to court when it decides it doesn’t like something a Government has said or done. Such legal actions sometimes win, sometimes lose, but they almost always delay. Why not here? Why the immediate collapse without so much as the whiff of a writ?

Instead of  Facebook saying it is

“committed to complying with the updated privacy laws” 

could we not have heard the following?

“We saw the Opinion of the EDPS and we think it is rubbish. Facebook  believes there is a clear and firm legal basis which supports our use of proactive child protection tools. Our lawyers wouldn’t have let us deploy them in the first place were it otherwise. This legal basis is established under a variety of international legal instruments. In fact we would go further and say we believe we have both a legal and a moral obligation to use the best available means to protect children. We will vigorously defend that position in court should it prove necessary.”

But maybe the more obvious point, the one that gets them over the line and justifies the award of the first ever Techno Chutzpah Oscar, is this: lest we forget, Facebook is the company that has acknowledged gigantic quantities of child sex abuse imagery are being exchanged on its platforms but, nevertheless, still intends to encrypt the very services the new EU privacy law affects, if that law remains unaltered.

If Facebook goes ahead with end-to-end encryption in the way it has said, what happens with the EU law will not matter, at least not within EU Member States, because none of the tools will be able to penetrate the encryption anyway.

Am I being unkind and cynical? Was Facebook merely striking a pose to try to encourage the EU to do the right thing, in part because it has already decided internally to abandon end-to-end encryption? Answers on a postcard please to the usual address.


Half a pat on the back

Thanks to tremendous lobbying and campaigning work by children’s organizations from across the world we have won the first part of what we wanted to achieve.

LIBE says “yes”

MEPs were tremendously impressed by the breadth and scale of support there was for the positions we took up on the Commission’s proposed derogation. It strengthened the hands of our friends in the European Parliament and hugely weakened our opponents.

The LIBE Committee today voted to put forward a report to the plenary meeting of the European Parliament next week. That means it should be possible for the trilogue to meet and decide the matter in time to beat the 20th December deadline.

Here is the press release of the Child Rights Intergroup welcoming the decision.

But we can only give ourselves half a pat. There is more still to be done.

I say this because, from the press release issued by the Parliament and from other reports, there are parts of what the Committee appears to have agreed which could still derail us.

No interruption or suspension of the tools

Conditions or riders have been attached to the continued use of the tools. If they mean the tools are suspended for any period of time, these conditions or riders must be resisted.

The conditions and riders are about transparency, accountability and reporting. These are things children’s groups should be very strongly in favour of but, at this late stage, to say they must be sorted out as a condition of continuing to use the tools seems utterly wrong.

So my suggestion is, over the next few days and into next week, we continue to lobby MEPs and national Governments – particularly the German Government – saying something along these lines:

  1. It is vital the trilogue completes its work ahead of the 20th December deadline.
  2. We are concerned, however, that even if the trilogue does complete its work in time, following the LIBE decision in full could make the use of the tools conditional on terms that almost certainly cannot be met within such a short timescale.
  3. We have no objection to stipulations about accountability, transparency or reporting mechanisms attaching to the continued use of the tools by companies. On the contrary, we welcome them. But the only reasonable course of action is to allow these matters to be resolved during the period of grace which the derogation will establish, or as part of the longer-term strategy if that is adopted during the period of grace.

We can then turn our attention to what happens during the period of grace and, above all, we can start to focus on working out what a long-term policy will look like.

Here is NCMEC’s statement which also discusses that point.


The questions to be asked in Brussels

Crunch time approaches in Brussels. Members of the LIBE Committee and later the plenary need to focus on the following questions:

  1. When the GDPR was making its way through the European Institutions do you think the co-legislators expressly intended to make it impossible for tech companies to prevent their customers from publishing, exchanging or storing images (still pictures or videos) of children being raped?
  2. When the GDPR was making its way through the European Institutions do you think the co-legislators expressly intended to prevent or delay the identification and removal from public view of images of children being raped?
  3. When the GDPR was making its way through the European Institutions do you think the co-legislators expressly intended to make it easy for sexual predators to locate and engage with children?
  4. When the GDPR was making its way through the European Institutions do you think the co-legislators expressly intended to prevent companies from trying to identify children who might be contemplating suicide or self-harm so as to divert them from that path?

I believe the answer to all of these questions is a simple, unqualified “no”.

Are there ways of deploying the kinds of child protection tools referred to which are entirely and unequivocally compliant with the highest privacy standards?

I believe the answer to that question is a simple, unqualified “yes”.

So now I am an MEP

Let’s say I am a Member of the LIBE Committee,  from Poland or Ireland – I am  an Irish citizen and I could become a Polish citizen. I am 100% in favour of protecting children to the greatest extent possible.  But what do I see?

A lack of transparency and safeguards

I have no evidence any company has behaved inappropriately or put anyone in danger, child or adult, when processing data that might be associated with the deployment of child protection tools. All I have is a deeply rooted suspicion. Call it a hunch.

This deeply rooted suspicion was allowed to take hold and flourish because there is no trusted transparency regime with associated safeguards and metrics emanating from accountable public sources which could assure me all is well.

I am asked to take everything on trust. That is wrong. No other word for it and it must be addressed in the forthcoming Digital Services Act. But what do I do in the meantime?

Two wrongs do not make a right

I look at how poorly some individual Member States have responded to the child protection challenge, as evidenced, for example, by their failure to implement fully the terms of the 2011 EU Directive  but also by their failure to act more broadly in society at large where the bulk of child sex abuse and threats to children occur.

I conclude the national politicians responsible, maybe even in my own Party, are only paying lip service to the idea of protecting children.

I look at the patchy engagement of some law enforcement agencies.

I look at how the different child protection tools we are discussing have emerged from private tech companies, starting back in 2009, and finally I look at what I think are the failures of Commission officials and Member States to address all these things satisfactorily up to now.

I might even reflect on my own responsibility here. This is not my first term as an MEP.

But still. What do I do?

Having looked at what I  believe is a series of process failures and other shortcomings do I then decide my higher duty is to those processes? Do I vote to bring an end to the tools?  Even for a short while until the mess is sorted out and all the procedural ducks are in a neat, bureaucratically satisfying row?  Should I vote to throw out the Commission’s interim proposal? Should I refuse to give children the benefit of the doubt?

Absolutely not


Privacy warriors arrive late

Governments and legislators stood by and watched for years while the internet exploded, bringing in its wake huge benefits but also several downsides, particularly for children.

“Permissionless Innovation” was the watchword. We even created special legal immunities to help things along, the idea being new stuff would be tried out around the edge of the network without anybody having to sign a form in triplicate, get a green light from “higher up” or worry about a writ or subpoena. This created a reckless culture which only now is beginning to be addressed in every major democracy. In the case of the EU this will be through the Digital Services Act.

Innovation under attack

Against this historical background, pardon me if a wry smile passes my lips when I hear the anti-grooming programmes, classifiers and hash databases being attacked. These are examples of innovation. These are examples of techies trying to find better ways of doing things, in this case keeping children safe. The very opposite of reckless. As Microsoft’s Affidavit attests, these tools are not supersmart tricks designed to make more money for whoever deploys them, although, given the history of Big Tech, the suspicion that they might be is completely understandable.

And who is attacking the innovative child protection tools just mentioned? Not people who are habitués of platforms where children’s rights and safety are discussed.  Most of the attackers are substantially identified with completely different agendas, principally the privacy agenda.

Of course everybody is entitled to an opinion, but if some of us who regularly plough the furrow of children’s rights and safety seem confused as to precisely why these privacy warriors are suddenly taking a deep interest in children, I hope they will not take it personally and will understand why.

Is this what the drafters of the GDPR intended?

When passing the GDPR did the European institutions expressly intend to make it difficult to detect and delete images of children being raped? Did they knowingly plan to make it easier for a paedophile to contact a child?

No. The very idea is absurd.

So if there is any legal basis at all for the critics’ arguments about proactive child protection tools, and I do not believe there is, it arises solely as an unanticipated, unintended consequence of a set of rules drafted principally for other purposes.

We need politicians to fix that problem, not manipulate or take advantage of it.

A collective mea culpa

If we had already constructed a transparency and accountability regime in which we all had confidence I doubt these issues would even be being discussed.  But we haven’t. For this we are all to blame, in varying degrees. The answer is to get on with building that regime not risk putting children in harm’s way.

I am certain much common ground could be found if we were not immersed in the unwanted, pressured environment created by the current, highly unusual circumstances.

We shouldn’t confuse jurisprudence with politics

As in all things there will be issues of balance and proportionality but in Europe aren’t these, essentially, jurisprudential questions to be determined in accordance with, for example, the European Convention on Human Rights, the EU’s Charter of Fundamental Rights and case law?  Should I add the UN Convention on the Rights of the Child and the Lanzarote Convention, to which every EU Member State has signed up? You decide.

Politicians should not take it upon themselves to say “we cannot do this or that because it is illegal or we must do the other because the law requires it” if all that amounts to is using the law as a cover for politics, or as a way of dodging responsibility for something you know could otherwise be unpopular.

The institutions will not allow laws to pass which ex facie are illegal. And if they do, neutral judges will resolve things.

Zero evidence of harm. Tons of evidence of good

Where is the evidence the use of anti-grooming tools, classifiers or hash databases has harmed anyone? There isn’t any.

But we have lots of evidence of the good the tools are doing.


Look at the number of csam reports being processed by NCMEC and how many of these resolve to offenders in EU Member States: 3 million in 2019, and 2.3 million in 2020 up to 1st October. 95% of these were derived from messaging, chat and email services. 200 children in Germany were identified, and 70 children in the Netherlands. And there is more of this kind of information available country by country.


Look at the concrete evidence showing how anti-grooming tools are protecting children in Europe. And the classifiers work in a similar way.

Between 1st January 2020 and 30th September 2020 NCMEC received 1,020 reports relating to the grooming and online enticement of children for sexual acts where these reports resolved to EU Member States.

905 were the result of reports made by the companies themselves, generated by their own use of the tools. Only 105 were the result of manual reports by the public. 361 reports came from chat or messaging apps. 376 came from social media. These led to action to save one or more children in Belgium, France, Germany, Hungary, the Netherlands and Poland. Tell me again why we should junk the tools?

Human review is an integral part of all the processes

There is always human review before any action is taken on something that is flagged by a classifier or an anti-grooming tool. Relying only on keywords is absolutely not what is happening. Context can be vital. But the tools do not comprehend, analyse, record or keep conversations or messages. They pick up on signs which are known to point to perils for kids. No signs. No action. Nothing happens. Just like sniffer dogs at airports.
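The division of labour described above can be sketched in code: an automated tool assigns a risk score to each item and, only when the score clears a threshold, queues the item for human review; nothing goes forward without a human decision. Everything in this sketch is invented for illustration (the `ReviewQueue` name, the threshold value, the toy classifier) – real systems use trained statistical models, not keyword tests.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ReviewQueue:
    """Automated flagging feeds humans; humans alone trigger any action."""
    classifier: Callable[[str], float]   # returns a risk score in [0, 1]
    threshold: float = 0.9               # invented value for the sketch
    flagged: List[str] = field(default_factory=list)

    def scan(self, message: str) -> None:
        # Below the threshold nothing happens at all: the tool neither
        # stores nor "comprehends" the conversation.
        if self.classifier(message) >= self.threshold:
            self.flagged.append(message)

    def human_review(self, confirm: Callable[[str], bool]) -> List[str]:
        # Only items a human reviewer confirms go forward to a report.
        return [m for m in self.flagged if confirm(m)]

# Toy classifier standing in for a trained model.
queue = ReviewQueue(classifier=lambda m: 1.0 if "RISK" in m else 0.0)
queue.scan("ordinary chat")   # no signal: no action, nothing retained
queue.scan("RISK sample")     # signal: queued for a human to assess
```

The point of the structure is the one made in the paragraph above: the automation only narrows the haystack; the decision to act stays with a person.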

And by the way, no image goes into a hash database of csam without first having been reviewed, normally by at least three sets of human eyes. It does not need to be looked at again after that before it goes to law enforcement or before the image is taken down. Requiring a further review would defeat the whole point of automating this part of the process. Among other things, don’t we want to minimise the number of times individuals look at things like that? Yes we do.
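The hash-database mechanism can be sketched in a few lines. This is a toy, not PhotoDNA: real systems use a robust perceptual hash that survives resizing and re-encoding, whereas SHA-256 here only matches exact copies, and the function names and sample database entry are invented for the sketch.

```python
import hashlib

# Toy stand-in for a vetted hash database. Entries are added only after
# multiple human reviewers have confirmed the source image; the database
# stores fingerprints, never the images themselves.
# (This entry is simply the SHA-256 of the bytes b"foo", for the demo.)
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def fingerprint(image_bytes: bytes) -> str:
    """Reduce content to a fixed-length fingerprint (exact-match only here)."""
    return hashlib.sha256(image_bytes).hexdigest()

def check_upload(image_bytes: bytes) -> bool:
    """Return True only on a database match; otherwise nothing happens.

    Like the sniffer dog at the airport: the content is never interpreted,
    recorded or kept -- no match, no action.
    """
    return fingerprint(image_bytes) in KNOWN_HASHES

assert check_upload(b"holiday photo") is False  # unknown content: no match
assert check_upload(b"foo") is True             # copy of a known item: flagged
```

Note the design choice the sketch illustrates: the service compares fingerprints against a closed list of already-confirmed material, which is why it can only ever “see the zebras” and says nothing about any other content.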




The wisdom of Max Schrems

I met Max Schrems at a seminar in a law school in the USA last year. He opened his remarks by saying that, in preparing his comments for the seminar, he had tried to talk to lawyers in the privacy community who specialised in or knew about children’s rights in the context of privacy law. What he said was “I couldn’t find anyone” or “there weren’t that many”.

In part what we are seeing  in the current debacle in Brussels is a product of that. The privacy community is largely a stranger to the world of online child protection. That must change, and soon.

Here is my brief summary of yesterday’s meeting of LIBE followed by a few observations.


There is a lot of support for the temporary derogation but, as things stand, it may not be enough to get us over a satisfactory line. We need to keep lobbying.

There are still some worrying misconceptions and misunderstandings kicking around. Unless they are addressed they could sink the tools by making them useless.

Very restrictive

The lead Rapporteur, Birgit Sippel, seems happy to allow tools to continue to be deployed for up to two years provided they identify only material classed as “child pornography” within the meaning of Article 2 of the 2011 Directive.

I believe that would kill off classifiers and the anti-grooming tools. This must be resisted but I think, in part, some people’s doubts are based on a fundamental misconception in relation to how the technologies work (see below).

More problematic is Ms Sippel’s suggestion that nothing is reported to the police unless there has been prior human review. That defeats the whole point of automated proactive systems.  The numbers are just too big. That’s precisely why these tools were developed.

What is essential is that there is an exceptionally low error rate. Professor Hany Farid says PhotoDNA works with an error rate of around  one in a billion or less.

I don’t have a problem with Ms Sippel’s ideas around digital impact assessments, consultations or evaluations of the software, on the contrary they sound great, but they cannot be made conditions precedent because that, in effect, means halting everything until goodness knows when.

And the issue about data transferring to the USA could also be another serious obstacle.

Privacy as a barrier to child protection? No.

We want privacy to protect our health and medical records, to stop companies sneakily snooping on us so they can sell us more stuff, to protect our banking transactions and our national infrastructure, to force companies to take stronger measures to prevent hackers getting our personal data and, yes, to stop unwarranted invasions of our private lives and communications by the state and other actors, bad or otherwise.

But look at Facebook’s announcement last week. Children in all parts of the world were benefitting from protections Facebook had implemented to detect threatened suicides and self-harm. Everywhere in the world except the  EU. Done in the name of privacy.

Now it seems, also in the name of privacy, tools could be banned which help keep paedophiles away from our children or which help the victims of child rape regain their human dignity by claiming their right to privacy.

Not understood the technology

At LIBE there were several references to “scanning everybody’s messages”. That is not what is happening with any of the tools we are trying to preserve.

When we used to go to airports, dogs would walk around sniffing lots of people’s luggage, searching for drugs and other contraband. The machines airport staff put our luggage through do something similar with x-rays. When we post letters or parcels, the Post Office or the carrier employs a range of devices to try to detect illegal items that might be in any of the envelopes or packages they are planning to deliver for us or to us.

Are the airport authorities or the postal services “scanning” everybody’s mail or luggage? No. At least not in any meaningful sense.

The child protection tools we are discussing are like the dogs at the airport, the luggage X-ray machines, or the devices in the Post Office sorting room.

They are looking for solid signs of illegal content or behaviours which threaten children. No sign. No action.

Could the tools be misused?

Could scanning tools be misused for other purposes? Yes they could. How we address that and reassure ourselves it is not happening is important, but the tools we have been discussing have been in use, in some cases, for over ten years, and we have ample evidence they are doing a good job. We have zero evidence they are doing a bad job.

Who would want to stop them doing that good job just because a variety of bureaucrats didn’t do theirs when they should?  That is what this boils down to.

We have to find a way to allow the tools to carry on while we construct a durable, long-term legal basis and oversight and transparency regime.

Those who claim protecting children in the way these tools can do is “disproportionate”  should recall that proportionality, like beauty, is in the eye of the beholder. And in every legal instrument I know we are told children require special care and attention because they are children.



Don’t be a child in Europe

Yesterday the European Data Protection Supervisor (EDPS) published an opinion on the European Commission’s proposal for a temporary suspension of parts of the e-Privacy Directive of 2002. It is a weak Opinion, riddled with error. The good points the EDPS makes are dwarfed and completely overshadowed by the bad.

A rebuke

A major part of the Opinion, in essence, is a rebuke of European Institutions for not doing things in precisely the right order, in exactly the right way at the right time.   The Opinion shows an abundance of bureaucratic correctness which entirely misses the human heart of the issues at stake, as well as important parts of the law.

Everywhere else, in every legal instrument I have ever read, including the GDPR, we are told children require special care and attention. Why? Because they are children. The EDPS affords them no such considerations. 

Article 24 of the Charter of Fundamental Rights

The EDPS makes no reference to the explicit language of the EU’s Charter of Fundamental Rights. Nothing. Not a word. As an aide-memoire I repeat the key words here:

The rights of the child

  1. Children shall have the right to such protection and care as is necessary for their well-being…
  2. In all actions relating to children, whether taken by public authorities or private institutions, the child’s best interests must be a primary consideration.

The EDPS never once even mentions the rights of children. If there is a balance to be struck he shows no signs of knowing how to locate the fulcrum.

A child’s right to privacy? Not mentioned

Search the document high and low. There’s nothing there. No mention of the legal right to privacy of a child who has been raped where pictures of the rape have been distributed for the whole world and her classmates to see. Not one word.

A child’s right to human dignity? Not mentioned

Neither is there any mention of a child’s legal right to human dignity which, in this case, entails getting the images of their humiliation off the internet, away from public view, to the greatest extent possible, as fast as possible. Not one word. 

The EDPS misunderstands the technologies

The technologies being debated do not understand the content of communications. They work in an extremely narrow and specific way.

If I go to a zoo wearing spectacles that only allow me to see zebras, the giraffes, lions and penguins will be invisible to me. They may pass in front of my unseeing eyes, but they might as well not be there. All I see are zebras.

This is how PhotoDNA works.  The EDPS is therefore simply, factually wrong when (page 2 and paras 9 and 52) he suggests there is any

“monitoring and analysis of the content of communications”

PhotoDNA only sees the zebras. In this case the zebras are the already known images of a child being sexually abused. That is to say, an image that should not be there in the first place, which nobody has any right to possess, never mind publish or distribute.

And the other child protection tools work in similar ways. They do not “analyse” the content of a communication. They cannot say what the picture is about or what a conversation is about. They can only say whether the communication contains known signals of harm or known signals of an intention to harm a child.

Do we really want companies to be indifferent and inert?

Does the EDPS want companies wilfully and knowingly to blind themselves to heinous crimes against children? Is he suggesting they should be indifferent to and inert towards what they are facilitating on their platforms?

A resolution of the European Parliament says otherwise

Law enforcement agencies have repeatedly stated it is completely beyond them to address these issues alone. They rely and depend on tech companies doing their bit, a fact recognised by the European Parliament less than a year ago.  On 29th November 2019 in a resolution  at para 16 we see the following:

“Acknowledges that law enforcement authorities are confronted with an unprecedented spike in reports of child sexual abuse material (CSAM) online and face enormous challenges when it comes to managing their workload as they focus their efforts on imagery depicting the youngest, most vulnerable victims; stresses the need for more investment, in particular from industry and the private sector, in research and development and new technologies designed to detect CSAM online and expedite takedown and removal procedures;”

How do scanning tools work?

The EDPS makes no reference to other types of scanning taking place on an extremely large scale, such as for cyber security purposes.  At a webinar organized by the Child Rights Intergroup on 15th October Professor Hany Farid made the following observations (at 24.28):

“If you don’t think that PhotoDNA and anti-grooming have a place on technology platforms then I ask you to do the following: turn off your spam filter, turn off your cybersecurity that protects from viruses, malware and ransomware because that is the same technology. And if you believe that we should use a spam filter and if you believe that you should protect your computer from viruses and malware, which I think you do, and if you believe that that technology has a role to protect this computer right here, then why shouldn’t these technologies protect children around the world? At the end of the day it is exactly the same technology, simply tackling a different problem.”

No mention of Microsoft’s Affidavit

On 14th October Microsoft published a sworn Affidavit in which the following words appear at para 8:

“PhotoDNA robust hash-matching was developed for the sole and exclusive purpose of detecting duplicates of known, illegal imagery of child sexual exploitation and abuse, and it is used at Microsoft only for these purposes.”  

At a LIBE Committee meeting it was suggested that companies were scanning content, ostensibly looking for illegal material, then processing the data they collect for commercial purposes. Leaving aside the fact that this would be illegal anyway, the Microsoft Affidavit, sworn under pain of perjury, expressly states that is not happening.

Microsoft also published the terms of its licence, which gives other companies and organizations permission to use PhotoDNA.

The EDPS makes no reference to the Affidavit. If it would help preserve the use of online child protection tools, surely other companies would be willing to swear similar Affidavits? Such Affidavits could remain in force at least until this matter is resolved, and even beyond if necessary.

The EDPS says he is worried about precedents

The EDPS says (para 53):

“The issues at stake are not specific to the fight against child abuse but to any initiative aiming at collaboration of the private sector for law enforcement purposes.” (emphasis added).

Here the EDPS abandons lawyer’s clothes and dons those of a (not very skilful) politician or campaigner.

This is the notorious “slippery slope” argument. It is morally and intellectually bankrupt.  A demagogue’s trick. A sleight of hand.

The unnamed terror

What is the unnamed terror the EDPS is worrying about?  We are not told. Isn’t the position clear? The proposed suspension is entirely and only about the protection of children. Nothing else. Nothing that isn’t written in the document.

It is quite wrong, and legally incorrect, to plead a concern for something that is not on the table, not in anyone’s line of sight.

If something comes up in the future, deal with it on its merits. If you agree with it, say “yes”. If you don’t, say “no”. Lawyers are meant to be able to distinguish between cases based on the facts.

Punishing children for other people’s mistakes

I have no brief to defend the Commission, much less the history of events leading up to  their proposal. But whatever the history, it is completely unacceptable to allow the tools to become illegal on 20th December only because nobody managed to sort this out to the satisfaction of the EDPS before now. 

That amounts to intentionally putting children in danger, punishing them for the past failures of others, adults who should have known better and acted differently sooner. Shame, shame.

Don’t be a child in Europe

Next week at the LIBE Committee meeting, if Members of the European Parliament are persuaded by the EDPS report, if it is ultimately reflected in the decision of the upcoming Trilogue, and the tools are outlawed, my advice is clear: “don’t be a child in Europe.”

Be a child somewhere else.

Posted in Child abuse images, Privacy, Regulation, Self-regulation, Uncategorized | 2 Comments

Joy tinged with anger

At 5.00 a.m. today the Head of Instagram published a blog post entitled “An important step towards better protecting our community in Europe”.

There is much that is important and of interest in Facebook’s blog, so please read it, but here, for me, are the key sections:

“We use technology to help… proactively find and remove… suicide and self-harm content… Between April and June this year, over 90% of the suicide and self-harm content we took action on was found by our own technology before anyone reported it to us. But our goal is to get that number as close as we possibly can to 100%.

“Until now, we’ve only been able to use this technology to find suicide and self-harm content outside the European Union.”

European children deprived of protection

So children and young people everywhere else in the world have been benefitting from Facebook’s deployment of proactive tools which help stop young people killing or harming themselves. Children in Europe haven’t been. Why?  To answer that we have to look to the Irish Data Protection Commissioner (DPC).

Seemingly, having started monitoring this type of content in 2017,  Facebook raised the matter with the DPC back in March 2019.  The DPC “strongly cautioned Facebook because of both privacy concerns and a lack of engagement with public health authorities in Europe on the initiative.”

Facebook followed the DPC’s advice and consulted with health authorities. Nevertheless, the DPC still said “concerns remain regarding the wider use of the tool to profile users… culminating in human review and potential alerts to emergency services”.

You might want to read that again. It’s hard to believe anyone could be anxious about the possibility that an ambulance or a police officer could go knocking on a door in the expectation of saving a life, and for that to be frowned on or obstructed. Certainly in the UK we are constantly told to contact the emergency services if we have any reason at all to suspect someone is in danger, particularly if that someone is a child.

Just to remind you, in the GDPR and in every legal instrument I know, the position of children is said to require extra care and attention. Yet it is starting to feel that whenever a traditional privacy lawyer writes or drafts something, things end up all wrong. Go figure.

And by the way there are no issues of principle associated with Facebook sending a message to the police or the ambulance service if someone has made an individual, manual report to them about a person they believe is at risk.  It is only if the tools are deployed proactively, at scale, that the DPC gets agitated.

So a malicious or mischievous report gets acted on, while a genuine one can’t be found by a machine. Where’s the logic in that?

Have we taken leave of our collective senses?

Could the tragic death of Molly Russell have been avoided if these tools had existed then? Who can say? But equally I am certain I will not be alone in wondering what kind of world we are creating if, in the name of privacy, we allow these things to happen when we had the possibility of stopping or reducing them.

We have been content to allow the internet to do things that not many years ago would have seemed utterly unbelievable. Saving children’s lives? That’s where we draw a line?

Emotional? Too right it’s emotional

I have heard it said that we shouldn’t be too emotional about these questions. Excuse me. What that is actually saying is that we should detach ourselves from our humanity. It hardly matters to me what impact technology might have on a lump of concrete or other inanimate object but, if you have it within your power to stop the pain, death or suffering of another human being, only a desiccated robot could turn away and say “no”.

The technology that has built huge fortunes for entrepreneurs, and pays vast salaries to employees who know the colour of your socks, where you go on holiday and what you eat for breakfast, cannot be turned to saving lives? I understand about “balance” and “safeguards”, but whenever I hear those words what I am usually hearing is “no” again.

It’s not about privacy. It’s about trust

The mantra of the internet has been about innovation and the wonderful benefits technological advances can produce.

So now technology allows us to detect when a child is contemplating killing themselves.  We have technology which allows us to detect when a paedophile is attempting to groom a child. We have technology which can help protect the privacy rights of children who have been raped and further humiliated by having images of their rape broadcast to the world.

Why would we not use them?

Because some people do not trust Big Tech to use these tools lawfully, i.e. in ways which do not exploit people’s data in a manner the law already forbids.

The real answer, therefore, is to address the lack of trust in Big Tech. And that means addressing transparency. And the fact that our politicians and institutions have so far failed to do this is no reason, now, to make those tools illegal. That is treating a symptom not the disease. We need to get at the disease.

My next blog

I fear my next blog will not be a happy one either. Yesterday we had great news about LIBE agreeing to take the item on 16th November, and that remains the case. But other things have happened today. Watch this space. It ain’t over ’til it’s over.

Posted in Child abuse images, Default settings, Facebook, Privacy, Regulation, Self-regulation | Leave a comment

Nuremberg and the internet

Many people who read “East West Street” by Philippe Sands QC, may have been surprised to learn it was the horrors of the Second World War which propelled the international community – as represented by politicians, mainly elected ones – to come together and formulate a set of magnificent  documents which would constitute the core of what we now recognise as international “human rights law”.

The Charter of the United Nations was adopted in October 1945.  The Universal Declaration of Human Rights in 1948. Many human rights instruments which emerged in the ensuing years can be traced to these two seminal, post-war moments and arguments heard or developed at the Nuremberg Trials.

The UN Convention on the Rights of the Child

Beginning in 1979 the Polish Government initiated the processes which, in 1989, led to the adoption of the United Nations Convention on the Rights of the Child (UNCRC). 

What do all of the above have in common? They predate the internet and the massive availability of digital technologies.  In astonishing ways which would have been hard to predict even twenty years ago, never mind in 1948, digital technologies have changed the way we live.

In the case of the UNCRC, the language used is so out of step with the contemporary realities of children’s lives that a “General Comment” has been commissioned to act as an aid to interpretation, specifically in respect of the digital environment. You have until 15th November to make your views known.

The General Comment is not going to change any of the words or principles set out in the UNCRC. There is no need for that. As with the Universal Declaration of Human Rights, the values it enshrines are eternal. Or ought to be. But, as with the UNCRC, so also with the Universal Declaration and similar. We have to start adjusting how we approach matters in a way which is consonant with the digital age. Some of the habits and ways of thinking developed in the analogue era are now obsolete or obsolescent.

There is nothing new under the sun

There has always been crime. There have always been threats to children, the weak, the gullible, the ill-educated or illiterate. Threats to national security and democratic processes are not entirely novel. But the speed, scale, complexity, and the international dimension to the kind of  behaviours the internet has facilitated have created enormous difficulties yet to be solved. They will not be solved by people who believe this nonsense:

“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.

We have no elected government, nor are we likely to have one, so I address you with no greater authority than that with which liberty itself always speaks. I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.”

It was singularly apt that this “Declaration of the Independence of Cyberspace” was made, in 1996, on or around the tenth anniversary of, and perfectly reflecting, Ronald Reagan’s immortal contribution to political thought: “The nine most terrifying words in the English language are: ‘I’m from the Government, and I’m here to help’.” And where was this utterly up-itself Reaganist utterance made? Davos. Where else?

Governments are a long way from being perfect instruments of, well, almost anything, but they are all that the vast majority of people have or can turn to when faced with overwhelming or complex threats to the commonwealth.

The highly educated, tech-savvy activists will always, or at any rate generally, be able to look after themselves in cyberspace. Governments are for the rest of us. The challenge here is, through the ballot box and our own engagement with political processes, to make those processes better, not to give up on them by ceding territory to the geeks. Elections are our shareholder meetings, where nobody has a super veto.

The tide is turning

In every democratic country in the world the tide is turning. In the USA there is EARN IT. Section 230 has been trimmed back and will be trimmed further. In the EU the Digital Services Act heaves into view. In the UK the Online Harms White Paper will soon be upon us. Look at Germany, France, Australia and many other places. Why now?


Knowledge of the internet is being democratised 

Historically, too few judges, politicians, policy makers, mainstream journalists and community activists had a good understanding of the internet or the underpinning technology. It emerged so fast. We were awestruck and dazzled. To quote Arthur C Clarke, this stuff really did look like “magic”. We fell for the Silicon Valley schtick.

The techie magicians might have worn jeans and T-shirts, but we now know that was only to hide the suits as the early idealism was smothered by Wall Street.

Knowledge of the internet has been democratised by our experience of it. People are no longer intimidated by the jargon. Democracy trumps technocracy when it comes to social policy, and we all now know the social consequences of tech matter. Hugely.

Do I have blind faith in all political institutions and the police and security services which are meant to serve them? Of course not. Only an idiot would think that.  Look at Snowden and Echelon.

Quis custodiet ipsos custodes?

This is a question that is almost as old as the hills. All public institutions and Big Tech must be bound by laws and we must develop effective, independent transparency regimes to ensure those laws are being routinely kept not routinely broken. But equally we must not cut off our noses to spite our faces until we reach that happy point.



Posted in Child abuse images, Default settings, E-commerce, Internet governance, Privacy, Regulation, Self-regulation | Leave a comment

I am not going to say “I told you so”

I generally find it extremely irritating when people turn to me and, usually with a smug look, say “I told you so,” so that won’t happen here. With little additional comment I will merely draw your attention to a report which was released in the USA last month.

First point: it was produced by a body called the “Coalition for a Secure and Transparent Internet”. Its mission is to “advocate before U.S. and EU policymakers, ICANN, registrars, registries, and other stakeholders about the importance of open access to WHOIS data.”

Slightly surprised the word “accurate” does not appear between “to” and “WHOIS” but for most sensible people I guess that would be implied.

Congressman Robert Latta asked several US Federal Agencies for their views on the state of play with WHOIS, referring specifically to the current Covid crisis. This, inevitably, raised broader issues.

In September CSTI published the replies the Congressman had received. Below are a few choice extracts.

From the Food and Drug Administration

“Access to WHOIS information has been a critical aspect of FDA’s mission to protect public health. Implementation of the E.U. General Data Protection Regulation (GDPR) has had a detrimental impact on FDA’s ability to pursue advisory and enforcement actions as well as civil and criminal relief in our efforts to protect consumers and patients.”

From the Federal Trade Commission

“You also highlighted your concerns that the implementation of the European Union’s General Data Protection Regulation (“GDPR”) has negatively affected the ability of law enforcement to identify bad actors online. I share your concerns about the impact of COVID-19 related fraud on consumers, as well as the availability of accurate domain name registration information.”

From Homeland Security

“HSI views WHOIS information, and the accessibility to it, as critical information required to advance HSI criminal investigations, including COVID-19 fraud. Since the implementation of GDPR, HSI has recognized the lack of availability to complete WHOIS data as a significant issue that will continue to grow. If HSI had increased and timely access to registrant data, the agency would have a quicker response to criminal activity incidents and have better success in the investigative process before criminals move their activity to a different domain.”

From the Department of Justice/FBI

“…greater WHOIS access for law enforcement would increase the effectiveness of… investigations by identifying illicit activity in specific areas, and would assist in disrupting and dismantling criminal organizations.”
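For context, the lookup the agencies are describing is mechanically very simple: RFC 3912 specifies a TCP connection to port 43, the query string followed by CRLF, then reading until the server closes the connection. The sketch below is illustrative only: the server name is an assumption (it serves .com/.net; other TLDs use other servers, which a real client would look up first), and the parser is a naive key/value scan rather than a full WHOIS client:

```python
import socket

def whois_query(domain: str, server: str = "whois.verisign-grs.com",
                timeout: float = 10.0) -> str:
    """Perform a raw WHOIS lookup per RFC 3912.

    Connect to TCP port 43, send the query terminated by CRLF, and read
    the response until the server closes the connection.
    """
    with socket.create_connection((server, 43), timeout=timeout) as sock:
        sock.sendall(domain.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

def parse_whois(text: str) -> dict:
    """Collect 'Key: value' lines from a WHOIS response (first occurrence wins).

    Lines starting with common comment/notice markers are skipped. Real
    responses vary by registry and registrar; this is a rough scan.
    """
    fields = {}
    for line in text.splitlines():
        if ":" in line and not line.lstrip().startswith(("%", "#", ">>>")):
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key and value and key not in fields:
                fields[key] = value
    return fields
```

Since GDPR-driven redaction, the same query typically returns placeholder values such as “REDACTED FOR PRIVACY” in the registrant fields, which is exactly the gap the agencies quoted above are complaining about.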

How did we ever get to this?

That is an excellent question. I’m glad someone asked it.

I agreed about the need for ICANN to be given complete independence from the US Federal Government. But the Obama Administration handed over control without dotting the i’s and crossing the t’s. They left ICANN with the ability to abandon or substantially modify their historic mission, at least in respect of WHOIS.

Once free of a potential corrective intervention by the US Federal Government, ICANN became ever more obviously a trade association, a racket.

The public interest always comes second to Registrars’, Registries’ and their symbiotic co-dependent’s (ICANN’s) financial interests.

ICANN has weakened WHOIS, not strengthened it. They have reduced the obligations to ensure WHOIS data are accurate and that also means up to date. Link that with other real world developments about how the internet is being managed, and by whom, and anyone with two brain cells can see the future. But that won’t stop the Registrars, Registries and ICANN from dragging things out for as long as possible. Delay for them is the same as money. And money is what it is all about.

Could the US Government reverse its decision and take ICANN back under its wing? Probably not, but if it were shown that ICANN acted in bad faith from the get-go, with no serious intention ever to fulfil or keep to the terms of the “Affirmation of Commitments”… then what?

Who was asleep at which wheel?

The EU must take its share of the blame for what happened next, at least insofar as it concerns WHOIS.

In the four years or more between the draft GDPR being published and it being adopted as a final, legal instrument, none of the following words were uttered, never mind discussed, anywhere at any time in Brussels, at least not in any public meetings where minutes were taken and later published. Those words were: ICANN, Registrars, Registries, Registrants and WHOIS.

It’s not that the EU took its eye off the ball. They never had their eye on it. It was only after the event that officials went in to bat to limit the damage, once the scale of ICANN’s impudent ambition became apparent. Why was it necessary for them to do that? Because ICANN had adopted an interpretation of GDPR rules which would never have been possible if those rules had been properly drawn up in the first place. And that interpretation is the reason for the comments shown above.

Finally, here is the other nagging question. If EU bureaucrats were not over-familiar with ICANN’s quaint ways and hidden intentions, if they had been lobbied, seduced, hoodwinked or neutralized by the hype, where were the cops and the governments?

A perfect smokescreen

Mainstream media journalists’ eyes glaze over at the first mention of ICANN’s recondite terminology. They shy away when they hear about the glacial pace at which things happen in obscure, acronym-heavy sub-committees. That creates a perfect smokescreen.

Nobody comes out of this covered in glory, other than the Registrars, Registries and their servants, the ICANN bureaucracy. They got exactly what they wanted. Perhaps “glory” is the wrong word here?

A friend of mine who was once utterly immersed in ICANN and similar bodies, e.g. the IGF, reflected how, in the early days, there was a group of high-minded, public spirited people who flew around the world convinced their personal engagement with this still relatively “new thing”, the internet, and the participatory bodies which it was spawning e.g. ICANN and the IGF, was truly going to reshape that world and make it a better place. “Noblesse oblige”. Then they woke up and realised they’d been had.

Posted in Uncategorized | Leave a comment