Regrets? I’ve had a few….

Last week (20th January) the UK Parliament’s Home Affairs Select Committee interviewed representatives of Facebook, WhatsApp, Twitter, Google, Snap and TikTok.

The Chair of the Home Affairs Select Committee is Yvette Cooper, an intellectual heavyweight of the first water. You had to feel a modicum of sympathy for the hapless folk the companies fielded. But only a modicum. A mini modicum.

Inevitably, on Inauguration Day in the USA, much of the Committee’s focus was on Trump, Trumpism and the post-truth world that helped create and sustain both. 6th January figured large.

To their credit none of the company representatives sought to deny or minimise the role social media businesses played in the events leading up to and including 6th January. The air was full of regrets for not acting sooner or differently. Phrases like “we are still learning” and “we know we must do better” peppered the replies to MPs’ questions. All this put me in mind of Professor Sonia Livingstone’s aside to me in correspondence about the importance of

“breaking the cycle of

  1. Putting a new product or service into the market
  2. Waiting for civil society to spot the problems and families to experience them
  3. Taking belated action.”

I might have added

4. Then being ready with self-deprecating comments like “we know we must do better” and “we are still learning”.

The disarming humility and contrition are doubtless genuinely meant at the time by the people speaking for their employers, but humility and contrition butter no parsnips. Particularly when similar things keep on keeping on. There is a limit to the price societies can be expected to pay to allow companies the “freedom to innovate”. We are about to find out where that boundary lies. s230 is heading for the exit.

Facebook and end-to-end encryption

Yvette Cooper and others also raised questions about Facebook’s plans to introduce end-to-end encryption (E2E). In particular Cooper wanted to know what impact Facebook themselves thought this would have on their own ability to detect child sex abuse images currently being exchanged via Messenger and Instagram Direct.

Monika Bickert’s reply was certainly truthful, in a literal sense, but it was also incomplete to the point of being deceptive. Her answer to Cooper’s question was

“I don’t know but I accept the numbers will go down”

Future hypotheticals

Bickert added that she thought the numbers would probably go down anyway because of other measures the company was taking. In other words, the drop in numbers that is coming if things go ahead as planned may partly be down to Facebook simply being more effective in discouraging illegal behaviour which threatens or harms children. Cooper exposed this as self-exculpating baloney.

It turns out this largely hinges on planned educational initiatives designed to help children avoid abusive individuals and situations in the first place. Not exactly mind-blowing or revolutionary. In fact it is the kind of stuff they are already doing, and if all Bickert is saying is that they will do more of it, or do it better, then bring it on. It is welcome, even though a tad oblique as compared with straightforward detection, deletion and reporting, which was the main thrust of Cooper’s questioning. Cooper was not asking about images that might not be created or exchanged, or paedophiles who might be avoided.

46% decline in 21 days

Cooper referred to numbers published some time ago by NCMEC. These suggested if Facebook went ahead with E2E there could be a 70% drop in images being detected, deleted and reported. That’s globally.

What Cooper evidently did not know, but Bickert must have, was that the day before the Select Committee meeting NCMEC had published new data showing the known or actual effect of Facebook ceasing to be able to detect child sex abuse in the manner they had hitherto.

Because of the fiasco with the European Electronic Communications Code, on 20th December Facebook stopped scanning for child sex abuse materials in all EU Member States. Stopping scanning has exactly the same effect as introducing E2E.
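The tools in question are, at their core, server-side matching: the service computes a fingerprint of each image passing through it and compares it against a list of fingerprints of known illegal images. Facebook uses perceptual hashes such as PhotoDNA; the crude sketch below uses an exact cryptographic hash simply to show the shape of the idea (the blocklist entry is hypothetical, it is just the hash of the word "foo"):

```python
import hashlib

# Hypothetical blocklist of fingerprints of known illegal images.
# (This example entry is simply sha256 of b"foo".)
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint of the image content."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_report(image_bytes: bytes) -> bool:
    # This check is only possible while the server can see the plaintext
    # image. Under end-to-end encryption, or with scanning switched off,
    # there is nothing for this function to inspect.
    return fingerprint(image_bytes) in KNOWN_HASHES

print(should_report(b"foo"))  # True: its fingerprint is on the example list
print(should_report(b"bar"))  # False: not a known image
```

The point of the sketch is the last comment: whether scanning is switched off or the content is end-to-end encrypted, the matching step simply never runs, which is why the two have the same effect on detection.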

On 19th January, NCMEC’s newly published numbers showed that in the 21 days immediately following 20th December there had been a 46% drop in reports from EU countries.

Excluding the UK, in the three weeks prior to 20th December NCMEC received 24,205 reports linked to EU Member States. In the three weeks afterwards it dropped to 13,107. We will never know which children were in the 11,000 images that weren’t picked up. How many were new images, never seen before, with all that that entails?
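For anyone who wants to check the arithmetic, the headline percentage follows directly from the two report counts NCMEC published (the variable names below are mine):

```python
# NCMEC figures quoted above: EU-linked reports in the three weeks
# before and after Facebook stopped scanning on 20th December.
reports_before = 24_205
reports_after = 13_107

shortfall = reports_before - reports_after
drop_pct = 100 * shortfall / reports_before

print(shortfall)           # 11098 -- the roughly 11,000 not picked up
print(round(drop_pct, 1))  # 45.9  -- i.e. the 46% drop NCMEC reported
```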

So when Cooper asked, as she did twice, about the likely effect of introducing end-to-end encryption, Bickert was truthful when she said she couldn’t say, but she might at least have mentioned the numbers NCMEC had just published. Then she could have explained why a 46% drop, or worse, concretely, not hypothetically, is a price worth paying.

Facebook blames their customers

Cooper persistently challenged Bickert as to why they were going ahead with E2E at all when they knew it would mean more children being put in harm’s way, more perpetrators going un-caught and un-punished. Bickert’s answer was, er, “surprising”.

Bickert referred to a survey of British adults who, seemingly, listed privacy-related concerns as their “top three”. I am not sure which survey Bickert had in mind, she didn’t say, but if it was the 2018 Ofcom one she might have read a little further and seen that “the leading area of concern” is the protection of children. But even if that was not the case, whether or not children were “listed” in the top 50 concerns expressed by adults, teens or stamp collectors for that matter, what was Bickert really saying?

“Don’t blame us. We’re only doing this because it’s what the dudes want and our job is to give it to them.”

An industry standard?

Bickert and her colleague from WhatsApp shifted their ground a little, saying “strong encryption is now the industry standard”, as if this was the key justification for going ahead with or retaining E2E. Cooper pointed out that Facebook was a major part of the industry, so that amounted to rather transparent, self-serving circular reasoning. Moreover in other areas Facebook has repeatedly shown it is willing to strike out alone and not just follow the herd. They cannot now shelter behind the actions of others.

The underlying reasons?

Suggesting something is an “industry standard” is simply a less vulgar or less pointed way of saying “our revenues will likely be badly impacted if we don’t do this”. It’s a variation on the dudes theory expounded earlier. In other words it is about money.

Secondly, how did we get to a point where the dudes seemingly feel they need to have E2E? Isn’t it because of the previous actions and admitted failures of companies like Facebook?

So first they create the problem and then they come up with the wrong answer to it. Chutzpah on stilts.

Facebook’s “pivot to privacy” is alliteratively admirable but not in any other way. It is about Facebook trying to repair its appalling image in the privacy department, based on its history of not respecting or taking sufficient care of its users’ privacy. It is acting now in order to continue generating gigantic quantities of moolah.

Towards a very dark place

We may never know what role encrypted messaging services played in organizing and orchestrating the events of 6th January but few can doubt that the unchecked growth of strongly encrypted messaging services is taking us towards a very dark place. A place where child abusers as well as fascist insurrectionists feel safe.

In and of itself strong encryption is not a bad thing. Indeed it is now essential in many areas. But in the wrong hands, used for the wrong purposes, it can facilitate a great deal of serious damage. We have to find a way to ensure that does not happen. If companies like Facebook do not find a way of doing that, they will have one thrust upon them. The Silicon Valley experiment has run its course. It will soon look different.


Adding insult to irony

If, like me, you were brought up a Catholic or in another Christian denomination, likely you will know 6th January is celebrated by the faithful as the “Feast of the Epiphany”. Well, in a secular sense, 6th January 2021 definitely was an epiphany for us all, meaning a moment of profound revelation.

Witness five dead bodies in or around the US Congress and a televised attempt to frustrate the outcome of an election in order to preserve in office a liar and a cheat who openly incited violence while actively seeking to undermine the Constitution.

Of course the events of that day did not come out of nowhere. They reflect a deeper malaise and deeper divisions. Rooted in disillusionment, in the sense that the American Dream is not delivering for them, a great many angry people found confirmation of their biases in the constant stream of falsehoods and distortions fed to them by Trump and his fellow conspirators. Never have the consequences of allowing a “post-truth” society to emerge and grow been more clearly in evidence.

Can there be any real doubt about the role social media companies played in creating, sustaining and amplifying the societal fissures that brought us to 6th January? Let’s not get into the practical, organizing role social media also played in orchestrating the murderous assault. That’s for another day. Will we ever know how much was done through strongly encrypted channels? Probably not.

It doesn’t stop there

The aftermath of 6th January 2021 then saw private entities, companies, silencing the President of the United States and shutting down a speech app (Parler) altogether, or at least very substantially. In so doing Silicon Valley added insult to irony.

They gave Trumpism a megaphone and, in the name of free speech, timorously stood back, letting it blossom as the dollars rolled in. It was only when Trump went almost foaming-at-the-mouth insane and the scenes of 6th January were televised that the inescapable and repeatable logic of the laissez-faire s.230 nightmare was fully, unavoidably exposed.

Then some of the same companies decided to shut Trump up. It’s hard to think of this either as a step too far or as a step in the right direction because in a better and more rational world the need for it to be taken at all would never have arisen. What started as a benign experiment with technology brought the USA, and therefore the world, to the edge of disaster.

The amount of sympathy I have for Trump or Parler can be measured only in large minus quantities. That is not the point. The point is the egregious presumption of private bodies deciding to make public policies in areas of fundamental importance to our whole way of life. De haute en bas they float above us mere mortals, telling us when we meet their exacting standards and punishing us when we don’t. Sadly they are constantly deflected by the desire to earn money. This clouds their vision from time to time.

Fundamentally this is a failure of governance

A great many idealistic people who were disgusted with the shortcomings of mainstream politics either in their own country or globally, or both, saw the internet as a way of establishing a whole new set of possibilities.

Then the money moved in. The money saw different opportunities and did something really smart. Cynical, but smart. Not only did they get s.230 adopted in the USA and copied elsewhere, they also managed to implant in people’s minds the idea that the absence of regulation was the same as “freedom”. Any attempt to regulate the internet (meaning them or their businesses) was portrayed as an actual or potential attack on “freedom”. Politicians and judges stepped back, unsure of themselves. In truth the absence of regulation was just another way of creating room to make more cash.

After the money came the totalitarians. They learned a lot from what they observed elsewhere. In particular they learned from surveillance capitalism. Often the very same companies and engineers that helped Palo Alto were now helping Pyongyang.

Meanwhile we have a UN body called the Internet Governance Forum which, since 2006, has pretended to have some influence on matters of the kind discussed here. I predict it is not long for this world. This has been coming for a while; 6th January sealed its fate. That’s a shame in many ways, because the Forum has great strengths.

Mozilla’s plans to encrypt DNS queries in Firefox 

What has the main argument I am making in this blog got to do with child protection? Everything. If you doubt that just read a consultation document published by Mozilla. In particular look at this sentence:

“Numerous ISPs today provide opt-in filtering control services, and  (we intend) to respect those controls where users have opted into them.”  (emphasis added).

To put that slightly differently,  Mozilla has decided not to “respect” those controls where users have not “opted into them”.
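For readers unfamiliar with the mechanics: a DNS query is a small, readable packet, and ISP-level filtering works by inspecting it. Once Firefox wraps that same packet inside an encrypted HTTPS request to a resolver of its own choosing (DNS-over-HTTPS), the ISP’s filter never sees the queried name. A minimal, hand-rolled illustration of the packet in question (for illustration only, not how Firefox itself is implemented):

```python
import struct

def dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query message (qtype 1 = A record)."""
    # Header: transaction id, flags (RD=1), 1 question, 0 other records.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: each label length-prefixed, terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN

q = dns_query("example.com")
# Sent in the clear over port 53, the ISP can read "example.com" right out
# of this packet and block it. Sent as the body of an encrypted HTTPS POST
# to a DoH resolver, the very same bytes are invisible to the ISP.
print(b"example" in q)  # True: the name is plainly visible in the packet
```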

A self-appointed techno-priesthood has decreed that one approach to child protection is acceptable and another is not. Can I resist pointing out Mozilla’s global HQ is in a place called “Mountain View”? No I cannot. I do so as a service for those wondering where the latter-day Olympus is to be found.

Inertia is at the root of many evils in the internet space, particularly among the less literate and less knowledgeable, people who are often also among the most vulnerable, e.g. children. Whatever an individual ISP may have lawfully decided to do, Mozilla seems willing to expose children to the risk of harm unless and until their parents get their act together and choose to opt in to protective filters. Wrong answer. By a mile.

Mozilla’s consultation document was written before 6th January 2021. What it truly shows is that Zeus needs to go back to the drawing board.


Absurdities and atrocities

Voltaire famously said “Those who can make you believe absurdities can make you commit atrocities.” History is littered with examples of this, and last week in the USA we saw the same thing played out again.

Lies and travelling trousers

In the age of the internet never was it more true that a lie can be half way around the world before the truth has got its trousers on. The more fanciful or ridiculous (absurd) the lie the faster it is likely to spread through social media platforms. Eyeballs mean money and money is the name of their game.

We need urgently to get over the initial, marvellous hippy notion that in the internet we created something that enables everyone to be a publisher, a journalist, a doughty warrior concerned only to make the world a better place. That is true, we have.

But it is now abundantly clear we have also created something  which threatens that very idea. Last week was the proof, played out on TV.

How far are we willing to go to defend the world that emerged from and through the post-War settlement? The fate of Weimar should not be forgotten or what followed.

Oh the irony!

I am not the first person to note or comment on the irony. Governments have threatened to regulate social media platforms but now we see social media platforms doing something that looks very like regulating Governments.

Of course in a narrow way you could argue depriving Trump of his Twitter account or banning him from Facebook until he ceases to be President is not directly regulating a Government as such, but it is so close you would be hard-pressed to insert a Rizla paper between the two.

Too little too late

Obviously, I approve of what Twitter and Facebook did but that isn’t the point. One might ask why they didn’t do it a lot sooner. But the larger questions are how it ever came to this in the first place and could it happen again?

Trump and his cronies incited the mob in an assault on democracy, but he and they could only get to a point where that was possible because social media platforms and elements of the mainstream media helped build him up. The USA is now on a national alert because of fears similar acts will be repeated in State Capitols on 20th January. Inauguration Day.

The intimacy, immediacy and scale of the internet made a “post-truth” society possible. We have had lying politicians and lying campaigners before, but in modern times we have never had lying politicians or campaigners with the financial backing and tools such as the internet, with its handmaiden, profiling, to enable dangerous demagogues to reach and manipulate the lumpen, the alienated-dispossessed, the angry and the frightened.

Preserving liberal values is about preserving decency

So, yes, of course, we should have serious discussions about what free speech means in the age of the internet but if liberal democracy and liberal values are threatened where does it say we must stand by and let them die because we are paralyzed by anxiety and by laws drawn up for entirely different times? We need a legal framework which comprehends and embraces life in the early 21st Century. And we need it sooner, not later.


We need to treat Treaties seriously

The North American Free Trade Agreement (NAFTA) was a free trade deal between the USA, Canada and Mexico. It became operative in 1994, although free trade between two of the three, Canada and the USA, had existed since 1989.

In 2020 NAFTA was updated by the US-Mexico-Canada Agreement (USMCA). One of the principal differences between NAFTA and USMCA was the inclusion in USMCA of a raft of provisions which set limits on what any of the three countries can do in respect of policies impacting on the internet. In 1989 and 1994 the internet was still in its infancy, a long way from the massive and pervasive presence it has today.

USMCA is not an agreement made by three parties of equal status and power

Lest anyone runs away with the idea the USA felt obliged to agree to conditions advanced by the Canadians or the Mexicans following discussions between equals, even the most cursory glance at the terms of the deal makes it clear the USA was calling all the key shots. This is not surprising given the size and value of the US economy.

Here are some explanatory extracts from the article linked to above

“Unfortunately, not everyone is free to experience the full benefits of the internet age, because so many national governments restrict access to online services and websites…. (emphasis added).

To counter this trend, the USMCA prohibits a broad range of digital trade restrictions…. (ditto)

The agreement’s most important digital trade provisions enshrine policies essential to the effectiveness and operation of the global internet….

Another essential policy mandated by the USMCA is liability protection for online intermediaries. It may seem arcane and technical, but liability protection is a core policy of the laissez-faire approach that enabled the digital revolution to occur in the first place. In the United States, laws like the Digital Millennium Copyright Act and Section 230 of the Communications Decency Act prevent website operators from being held liable for the conduct of users on their sites….. (ditto)

……. liability protections are a key reason why the world’s most successful and innovative internet companies were built in America. (ditto)

The authors of the article did not add, but I will, liability protection is a major reason why the internet is the way it is today. Think “move fast and break things.”

Digital businesses were given unique privileges.  At the time the rules were being set the technology itself and the likely dynamics of the way the internet would develop were poorly understood by policy makers and judges. Thus companies were not incentivised to root out and prevent harm. On the contrary they were incentivised to wrap themselves up and remain inert in the warm blanket of immunity.

Are you getting the picture? So to speak

How did all these clauses and provisions get included in USMCA? Big Tech lobbied for them.  By that I mean Silicon Valley. American Big Tech. They wanted to export the American way, their way.

A more naked statement of self-interest it would be hard to find. Never mind what ideas these pesky “national governments” might get into their heads. “It’s our way or the highway” was the message. US companies were particularly concerned to get certain clauses in USMCA because the Canadian Government and the Canadian Supreme Court in particular were starting to get their act together and be more assertive. They needed putting back in their box.

Nice work if you can get it

Do I blame American tech companies for seeking to further their own interests? No. I do not. Why wouldn’t they? I might have a slight complaint that very often the lobbying done on their behalf was executed by or through well-funded intermediaries like trade associations, because the individual firms did not want to become identified in the public eye with such stark expressions of commercial self-interest and/or nationalist sentiment. Meanwhile, because of a shortage of resources, children’s organizations were not able to mount an effective counter-lobby. But hey! Whoever said life was fair? The playing field is not level across a vast acreage, not just here.

So this Wednesday in the UK Parliament

Following the publication last month of the UK Government’s final response to the consultation on online harms, the UK is about to embark upon its own online regulatory odyssey. I will be writing about that odyssey in due course, meaning soon, but let me say now there is much in what the Government is proposing which I heartily welcome. And a great deal of it would be completely nullified or circumscribed if US interests managed to do a repeat performance in respect of the UK-US trade negotiations currently underway. Our world-leading Age Appropriate Design Code would also be threatened.

We must not let that happen, which is why, in the House of Lords on Wednesday 6th January 2021, it is very much to be hoped that as many Peers as possible will vote in favour of the cross-party amendment standing in the name of Baroness Kidron, Lord Stevenson of Balmacara, Lord Clement-Jones and Lord Sheikh. A big majority in the Lords will encourage Members in the Commons to get behind it when it reaches them.

PS Just in case it isn’t obvious, similar considerations would apply to any trading bloc or individual country that enters into discussions with the USA about a free trade agreement.


A very bad day for children in Europe

If you live in an EU Member State and you have used Facebook Messenger or Instagram Direct today you probably saw this message. “Some features are not available. This is to respect new rules for messaging services in Europe. We’re working to bring them back.”

This cryptic statement refers, among other things, to the fact that Facebook have turned off their proactive child protection tools in every EU Member State. This is because today the provisions of the European Electronic Communications Code (EECC) kick in.

But Microsoft, Google, LinkedIn, Roblox and Yubo somehow managed to find a way to carry on with the tools. Well done them.

Given Facebook is the world’s largest source of csam and other child sexual exploitation materials reported to NCMEC and law enforcement, this is unbelievably disappointing.

This should never have happened in the first place BUT

OK, we should never have got into this position but where there is a will there is a way. Obviously with the five companies I just named there was a will to carry on protecting children by continuing to use the tools. They did find areas of doubt sufficient to justify a continuation. Facebook didn’t.

Facebook is usually not slow to act when an important commercial interest is threatened. Not here. Facebook rolled over.

Facebook is trying to reshape its image

Facebook is determined to appease and reach out to the privacy lobby. That is plainly an overriding corporate objective that trumps all others. Given the company’s previous lack of care and respect for their users’ privacy it is not hard to work out why they want to reposition themselves in this way.

But children are paying the price for their inglorious corporate history.

Until this is put right – as it surely will be – how many pictures of children being sexually abused will continue to circulate on the internet? How many paedophiles will manage to connect with children? How many existing victims will be harmed further, and how many new victims will there be? We will never know, but it is unlikely to be zero.

Does Facebook really still have a Safety Advisory Board? Were they consulted about this, if so when and what did they say?

The anti-suicide and self-harm tools?

What about the tools which try to detect a child contemplating suicide or self-harm? Have they also been suspended? Maybe they haven’t but essentially they work in the same way as the anti-grooming tools and the classifiers used to detect possible csam. Facebook should put out a statement specifically commenting on that point.

Concrete results

Last month NCMEC published a letter to MEPs in which they gave some hard numbers.

In 2019 NCMEC received 16.9 million reports referencing 69 million items of csam or child sexual exploitation. Of these “over 3 million” originated in the EU. That is 4% of the total, or about 250,000 per month. 95% of these reports came from services affected by the EECC. From these reports 200 children living in Germany were identified, as were 70 children living in Holland. In the same letter we see the 2020 numbers are going to be higher.
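As a rough cross-check on these ratios (the figures are those in NCMEC’s letter; note that the quoted “4%” only works out if the 3 million is set against the 69 million items rather than the 16.9 million reports, which would give closer to 18%):

```python
# Figures from NCMEC's 2019 letter, as quoted above.
reports_total = 16_900_000   # reports received worldwide in 2019
items_total = 69_000_000     # items of csam/exploitation referenced
eu_total = 3_000_000         # "over 3 million" originating in the EU

print(round(100 * eu_total / items_total, 1))    # 4.3  -- the ~4% figure
print(round(100 * eu_total / reports_total, 1))  # 17.8 -- share of reports
print(eu_total // 12)                            # 250000 per month
```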

Knowing Facebook accounts for the great majority of reports to which the NCMEC letter refers, we can see the likely dimensions of what Facebook have done.

Shame on Facebook. Let’s hope they succeed in “bringing them back” as soon as possible. Then they can announce they are dropping or modifying their plans to encrypt the very same services.

UK exempt?

Why do the tools continue in use in the UK? It seems to be because we adopted laws at national level which provide a good enough legal basis. Can it really be the case that no other Member State did the same? And if one or more did, how can Facebook justify cutting them off?

This has been a bad day for children in Europe.

We are heading for a strange world

Privacy laws were never intended to make it easier for paedophiles to connect with children. They were never intended to make it easier for pictures of children being raped to be stored or circulated online. And it would be a strange world indeed if that is where we are heading.

If there truly is a legal problem here it cannot be one of substance. It can only have arisen because various bureaucrats and lawyers did not get all their ducks in a row and take all the right steps at the right time.

Instead of a brave stance in defence of children, Facebook has buckled in front of the remediable incompetence of others.


A new industry award

On this crucial day for children, as the EU’s “trilogue” meets to decide the fate of proactive child protection tools within the 27 Member States, I have decided to inaugurate a new annual award.

I think I will call it “The Techno Chutzpah Oscar” but if anyone can come up with a better name please let me know.

The Oscar will go to the company that most transparently and egregiously behaves or speaks hypocritically in the context of online child protection. And I have no hesitation naming Facebook the inaugural winner.

Here is an extract from the New York Times of 4th December 2020

“Facebook, the most prolific reporter of child sexual abuse imagery worldwide, said it would stop proactive scanning entirely in the E.U. if the regulation took effect. In an email, Antigone Davis, Facebook’s global head of safety, said the company was “concerned that the new rules as written today would limit our ability to prevent, detect and respond to harm,” but said it was “committed to complying with the updated privacy laws.” (emphasis added)

This statement came from the company that normally goes straight to court when it decides it doesn’t like something a Government has said or done. Such legal actions sometimes win, sometimes lose, but they almost always delay. Why not here? Why the immediate collapse without so much as the whiff of a writ?

Instead of Facebook saying it is

“committed to complying with the updated privacy laws” 

could we not have heard the following?

“We saw the Opinion of the EDPS and we think it is rubbish. Facebook believes there is a clear and firm legal basis which supports our use of proactive child protection tools. Our lawyers wouldn’t have let us deploy them in the first place were it otherwise. This legal basis is established under a variety of international legal instruments. In fact we would go further and say we believe we have both a legal and a moral obligation to use the best available means to protect children. We will vigorously defend that position in court should it prove necessary.”

But maybe the more obvious point, the one that gets them over the line and justifies the award of the first ever Techno Chutzpah Oscar, is that, lest we forget, Facebook is the company that has acknowledged it has gigantic quantities of child sex abuse imagery being exchanged using its platforms but, nevertheless, still intends to encrypt the very services the new EU privacy law affects, if it remains unaltered.

If Facebook goes ahead with end-to-end encryption in the way they have said, what happens with the EU law will not matter, at least not within EU Member States, because none of the tools will be able to penetrate the encryption anyway.

Am I being unkind and cynical? Was Facebook merely striking a pose to try to encourage the EU to do the right thing, in part because they have already decided internally to abandon end-to-end encryption? Answers on a postcard please to the usual address.


Half a pat on the back

Thanks to tremendous lobbying and campaigning work by children’s organizations from across the world we have won the first part of what we wanted to achieve.

LIBE says “yes”

MEPs were tremendously impressed by the breadth and scale of support there was for the positions we took up on the Commission’s proposed derogation. It strengthened the hands of our friends in the European Parliament and hugely weakened our opponents.

The LIBE Committee today voted to put forward a report to the plenary meeting of the European Parliament next week. That means it should be possible for the trilogue to meet and decide the matter in time to beat the 20th December deadline.

Here is the press release of the Child Rights Intergroup welcoming the decision.

But we can only give ourselves half a pat. There is more still to be done.

I say this because, from the press release issued by the Parliament and from other reports, there are bits of what the Committee appears to have agreed which could still derail us.

No interruption or suspension of the tools

Conditions or riders have been attached to the continued use of the tools. If that means the tools are suspended for any period of time these conditions or riders must be resisted.

The conditions and riders are about transparency, accountability and reporting. These are things children’s groups should be very strongly in favour of but, at this late stage, to say they must be sorted out as a condition of continuing to use the tools seems utterly wrong.

So my suggestion is, over the next few days and into next week, we continue to lobby MEPs and national Governments – particularly the German Government – saying something along these lines:

  1. It is vital the Trialogue completes its work ahead of the 20th December deadline.
  2. We are concerned, however, that, even if the Trialogue does complete its work in time, if the LIBE decision is followed in total the use of the tools may be made conditional on terms that almost certainly cannot be met within such a short timescale.
  3. We have no problem or objection to stipulations about accountability, transparency or reporting mechanisms attaching to the continued use of the tools by companies. On the contrary we welcome them, but the only reasonable course of action is to allow these matters to be resolved during the period of grace which the derogation will establish or as part of the longer term strategy if that is adopted during the period of grace.

We can then turn our attention to what happens during the period of grace and, above all, we can start to focus on working out what a long-term policy will look like.

Here is NCMEC’s statement which also discusses that point.


The questions to be asked in Brussels

Crunch time approaches in Brussels. Members of the LIBE Committee and later the plenary need to focus on the following questions:

  1. When the GDPR was making its way through the European institutions do you think the co-legislators expressly intended to make it impossible for tech companies to prevent their customers from publishing, exchanging or storing images (still pictures or videos) of children being raped?
  2. When the GDPR was making its way through the European institutions do you think the co-legislators expressly intended to prevent or delay the identification and removal from public view of images of children being raped?
  3. When the GDPR was making its way through the European institutions do you think the co-legislators expressly intended to make it easy for sexual predators to locate and engage with children?
  4. When the GDPR was making its way through the European institutions do you think the co-legislators expressly intended to prevent companies from trying to identify children who might be contemplating suicide or self-harm so as to divert them from that path?

I believe the answer to all of these questions is a simple, unqualified “no”.

Are there ways of deploying the kinds of child protection tools referred to which are entirely and unequivocally compliant with the highest privacy standards?

I believe the answer to that question is a simple, unqualified “yes”.

So now I am an MEP

Let’s say I am a Member of the LIBE Committee, from Poland or Ireland – I am an Irish citizen and I could become a Polish citizen. I am 100% in favour of protecting children to the greatest extent possible. But what do I see?

A lack of transparency and safeguards

I have no evidence any company has behaved inappropriately or put anyone in danger, child or adult, when processing data that might be associated with the deployment of child protection tools. All I have is a deeply rooted suspicion. Call it a hunch.

This deeply rooted suspicion was allowed to take hold and flourish because there is no trusted transparency regime with associated safeguards and metrics emanating from accountable public sources which could assure me all is well.

I am asked to take everything on trust. That is wrong. No other word for it and it must be addressed in the forthcoming Digital Services Act. But what do I do in the meantime?

Two wrongs do not make a right

I look at how poorly some individual Member States have responded to the child protection challenge, as evidenced, for example, by their failure to implement fully the terms of the 2011 EU Directive but also by their failure to act more broadly in society at large where the bulk of child sex abuse and threats to children occur.

I conclude the national politicians responsible, maybe even in my own Party, are only paying lip service to the idea of protecting children.

I look at the patchy engagement of some law enforcement agencies.

I look at how the different child protection tools we are discussing have emerged from private tech companies, starting back in 2009, and finally I look at what I think are the failures of Commission officials and Member States to address all these things satisfactorily up to now.

I might even reflect on my own responsibility here. This is not my first term as an MEP.

But still. What do I do?

Having looked at what I believe is a series of process failures and other shortcomings, do I then decide my higher duty is to those processes? Do I vote to bring an end to the tools? Even for a short while until the mess is sorted out and all the procedural ducks are in a neat, bureaucratically satisfying row? Should I vote to throw out the Commission’s interim proposal? Should I refuse to give children the benefit of the doubt?

Absolutely not


Privacy warriors arrive late

Governments and legislators stood by and watched for years while the internet exploded, bringing in its wake huge benefits but also several downsides, particularly for children.

“Permissionless Innovation” was the watchword. We even created special legal immunities to help things along, the idea being new stuff would be tried out around the edge of the network without anybody having to sign a form in triplicate, get a green light from “higher up” or worry about a writ or subpoena. This created a reckless culture which only now is beginning to be addressed in every major democracy. In the case of the EU this will be through the Digital Services Act.

Innovation under attack

Against this historical background, pardon me if a wry smile passes my lips when I hear the anti-grooming programmes, classifiers and hash databases being attacked. These are examples of innovation. These are examples of techies trying to find better ways of doing things, in this case keeping children safe. The very opposite of reckless. As Microsoft’s Affidavit attests, these tools are not supersmart tricks designed to make more money for whoever deploys them although, given the history of Big Tech, the suspicion that they might be is completely understandable.

And who is attacking the innovative child protection tools just mentioned? Not people who are habitués of platforms where children’s rights and safety are discussed.  Most of the attackers are substantially identified with completely different agendas, principally the privacy agenda.

Of course everybody is entitled to an opinion but if some of us who regularly plough the furrow of children’s rights and safety seem confused as to precisely why these privacy warriors are suddenly taking a deep interest in children, I hope they will not take it personally and will understand why.

Is this what the drafters of the GDPR intended?

When passing the GDPR did the European institutions expressly intend to make it difficult to detect and delete images of children being raped? Did they knowingly plan to make it easier for a paedophile to contact a child?

No. The very idea is absurd

So if there is any legal basis at all for the critics’ arguments about proactive child protection tools, and I do not believe there is, it arises solely as an unanticipated, unintended consequence of a set of rules drafted principally for other purposes.

We need politicians to fix that problem, not manipulate or take advantage of it.

A collective mea culpa

If we had already constructed a transparency and accountability regime in which we all had confidence I doubt these issues would even be under discussion. But we haven’t. For this we are all to blame, in varying degrees. The answer is to get on with building that regime, not risk putting children in harm’s way.

I am certain much common ground could be found were we not immersed in the unwanted, pressured environment created by the current, highly unusual circumstances.

We shouldn’t confuse jurisprudence with politics

As in all things there will be issues of balance and proportionality but in Europe aren’t these, essentially, jurisprudential questions to be determined in accordance with, for example, the European Convention on Human Rights, the EU’s Charter of Fundamental Rights and case law?  Should I add the UN Convention on the Rights of the Child and the Lanzarote Convention, to which every EU Member State has signed up? You decide.

Politicians should not take it upon themselves to say “we cannot do this or that because it is illegal or we must do the other because the law requires it” if all that amounts to is using the law as a cover for politics, or as a way of dodging responsibility for something you know could otherwise be unpopular.

The institutions will not allow laws to pass which ex facie are illegal. And if they do, neutral judges will resolve things.

Zero evidence of harm. Tons of evidence of good

Where is the evidence the use of anti-grooming tools, classifiers or hash databases has harmed anyone? There isn’t any.

But we have lots of evidence of the good the tools are doing.


Look at the number of csam reports being processed by NCMEC and how many of these resolve to offenders in EU Member States: 3 million in 2019, and 2.3 million in 2020 up to 1st October. 95% of these were derived from messaging, chat and email services. 200 children in Germany were identified. 70 children in The Netherlands. And there is more of this kind of information available country by country.


Look at the concrete evidence showing how anti-grooming tools are protecting children in Europe. And the classifiers work in a similar way.

Between 1st January and 30th September 2020, NCMEC received 1,020 reports relating to the grooming and online enticement of children for sexual acts where these reports resolved to EU Member States.

905 were the result of reports made by the companies themselves, generated by their own use of tools. Only 105 were the result of manual reports by the public. 361 reports came from chat or messaging apps.  376 came from social media.  These led to action to save one or more children in Belgium, France, Germany, Hungary, The Netherlands and Poland. Tell me again why we should junk the tools?

Human review is an integral part of all the processes

There is always human review before any action is taken on something that is flagged by a classifier or an anti-grooming tool. Relying only on keywords is absolutely not what is happening. Context can be vital. But the tools do not comprehend, analyse, record or keep conversations or messages. They pick up on signs which are known to point to perils for kids. No signs. No action. Nothing happens. Just like sniffer dogs at airports.

And by the way, no image goes into a hash database of csam without first having been reviewed, normally by at least three sets of human eyes. It does not need to be looked at again after that before it goes to law enforcement or before the image is taken down. Requiring a further review at that stage would defeat the whole point of automating this part of the process. Among other things, don’t we want to minimise the number of times individuals look at things like that? Yes we do.
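To make that division of labour concrete, here is a minimal sketch of the workflow in Python. All names here are hypothetical, and a cryptographic hash stands in for the robust perceptual hashing real systems such as PhotoDNA use, purely to keep the example self-contained:

```python
import hashlib

# Illustrative sketch only. Real deployments use perceptual hashes, which
# survive resizing and re-encoding; sha256 is used here just so the example
# runs without external dependencies.

def image_hash(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash function."""
    return hashlib.sha256(image_bytes).hexdigest()

# The database stores hashes, not images. A hash is added only after
# multiple human reviewers have confirmed the source image is illegal.
confirmed_csam_hashes: set[str] = set()

def add_to_database(image_bytes: bytes, reviewer_confirmations: int) -> None:
    if reviewer_confirmations >= 3:  # at least three sets of human eyes
        confirmed_csam_hashes.add(image_hash(image_bytes))

def check_upload(image_bytes: bytes) -> bool:
    """Automated lookup against already-vetted hashes. No human reads the
    content unless there is a match; a match triggers a report, not a
    fresh review of the image."""
    return image_hash(image_bytes) in confirmed_csam_hashes
```

The design point is that the only automated decision is a lookup against hashes humans have already vetted; the matching step never comprehends, records or keeps the content of anyone’s messages.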



The wisdom of Max Schrems

I met Max Schrems at a seminar in a law school in the USA last year. He opened his remarks by saying that, in preparing his comments for the seminar, he had tried to talk to lawyers in the privacy community who specialised in or knew about children’s rights in the context of privacy law. What he said was “I couldn’t find anyone”, or at any rate “there weren’t that many”.

In part what we are seeing  in the current debacle in Brussels is a product of that. The privacy community is largely a stranger to the world of online child protection. That must change, and soon.

Here is my brief summary of yesterday’s meeting of LIBE followed by a few observations.


There is a lot of support for the temporary derogation but, as things stand, it may not be enough to get us over a satisfactory line. We need to keep lobbying.

There are still some worrying misconceptions and misunderstandings kicking around. Unless they are addressed they could sink the tools by making them useless.

Very restrictive

The lead Rapporteur, Birgit Sippel, seems happy to allow tools to continue to be deployed for up to two years providing they only identify material classed as “child pornography” within the meaning of Article 2 of the 2011 Directive.

I believe that would kill off classifiers and the anti-grooming tools. This must be resisted but I think, in part, some people’s doubts are based on a fundamental misconception in relation to how the technologies work (see below).

More problematic is Ms Sippel’s suggestion that nothing is reported to the police unless there has been prior human review. That defeats the whole point of automated proactive systems.  The numbers are just too big. That’s precisely why these tools were developed.

What is essential is that there is an exceptionally low error rate. Professor Hany Farid says PhotoDNA works with an error rate of around one in a billion or less.

I don’t have a problem with Ms Sippel’s ideas around digital impact assessments, consultations or evaluations of the software, on the contrary they sound great, but they cannot be made conditions precedent because that, in effect, means halting everything until goodness knows when.

And the issue about data transferring to the USA could also be another serious obstacle.

Privacy as a barrier to child protection? No.

We want privacy to protect our health and medical records, to stop companies sneakily snooping on us so they can sell us more stuff, to protect our banking transactions and our national infrastructure, to force companies to take stronger measures to prevent hackers getting our personal data and, yes, to stop unwarranted invasions of our private lives and communications by the state and other actors, bad or otherwise.

But look at Facebook’s announcement last week. Children in all parts of the world were benefitting from protections Facebook had implemented to detect threatened suicides and self-harm. Everywhere in the world except the EU. Done in the name of privacy.

Now it seems, also in the name of privacy, tools could be banned which help keep paedophiles away from our children or which help the victims of child rape regain their human dignity by claiming their right to privacy.

Not understood the technology

At LIBE there were several references to “scanning everybody’s messages”. That is not what is happening with any of the tools we are trying to preserve.

When we used to go to airports, dogs would walk around sniffing lots of people’s luggage searching for drugs and other contraband. The machines airport staff put our luggage through do something similar with x-rays. When we post letters or parcels the Post Office or the carrier employs a range of devices trying to detect illegal items that might be in any of the envelopes or packages they are planning to deliver for us or to us.

Are the airport authorities or the postal services “scanning” everybody’s mail or luggage? No. At least not in any meaningful sense.

The child protection tools we are discussing are like the dogs at the airport, the luggage X-ray machines, or the devices in the Post Office sorting room.

They are looking for solid signs of illegal content or behaviours which threaten children. No sign. No action.

Could the tools be misused?

Could scanning tools be misused for other purposes? Yes they could. How we address that and reassure ourselves it is not happening is important but the tools we have been discussing have been in use, in some cases, for over ten years and we have ample evidence they are doing a good job. We have zero evidence they are doing a bad job.

Who would want to stop them doing that good job just because a variety of bureaucrats didn’t do theirs when they should?  That is what this boils down to.

We have to find a way to allow the tools to carry on while we construct a durable, long-term legal basis and oversight and transparency regime.

Those who claim protecting children in the way these tools can do is “disproportionate”  should recall that proportionality, like beauty, is in the eye of the beholder. And in every legal instrument I know we are told children require special care and attention because they are children.

