A postscript on the encryption misrepresentation

In the latest “New Scientist” (page 11 of the print version) this heading caught my eye

“Spies may be gathering encrypted data to crack with a future quantum computer”.

It seems that even though today’s 007s (and doubtless others who swim in the same waters) cannot crack certain types of strong encryption, they believe that one day they will. In anticipation of that moment they are “harvesting” (stealing) encrypted messages and squirreling them away.

Quantum computers could be the tool that provides the key. Usable quantum machines might be here soon. Very soon. Many of the commercial secrets, personal transgressions and goodness knows what else would then become viewable, and might still have some value or at any rate have the potential to cause embarrassment, loss or danger.

I have already written about the fraudulent privacy promise being made to promote the wider use of strong encryption, but having read the New Scientist piece I thought I had better take up my virtual pen once more, because here is an additional angle.

Strong encryption is being promoted as a guarantee of safe passage to whistleblowers. A subversive weapon to topple tyrants etc. You know the drill. 

I acknowledge that the use of strong encryption could provide an incremental increase in privacy, but the word “incremental” is an important qualification. It is very different from “complete” or “absolute”.

For those individuals and platforms seeking to persuade us that the wider use of strong encryption will lead inexorably to the sunny uplands, in my last blog I suggested they should try to be a bit more honest and transparent. I even suggested a strapline which I would now like to amend.

From now on all marketing materials, public statements or advertising in this space should say something like this:

“Strong encryption will provide you with a bit more privacy but don’t forget when you are online you are never really private. And pretty soon as tech advances some people will be able to read your messages anyway. You have been warned.”

It seems boffins are “working on” algorithms that are safe from quantum computers but “working on” is not the same as “have developed”.  Moreover, who is to say whether or to what extent such a solution, if it were to emerge, could and would be applied retrospectively, reaching into every pocket where the stolen data were being stored?

So my last point is a rather obvious one: when people planning to introduce strong encryption to their platforms tell you they recognise strong encryption will mean they are wilfully depriving themselves of the ability to protect children but they are doing so in order to give end users greater privacy, they are not telling the truth, the whole truth and nothing but the truth. They are spinning a line.

 

 

Posted in Internet governance, Privacy, Regulation, Self-regulation, Uncategorized | Leave a comment

Child protection delayed is child protection denied

In December 2020, one second after midnight when the new provisions of the European Electronic Communications Code kicked in, Facebook stopped looking for child sex abuse material (csam) across its platforms in all 27 EU Member States. 

Historically, a huge proportion of the reports of csam found in the EU originated on one or other of Facebook’s properties. The company’s decision therefore had immediate, catastrophic and entirely predictable consequences. 


The above graphic was prepared by NCMEC, the principal global centre for receiving reports of csam. 

It shows that in the first seven months of this year over half a million reports that could have been made, weren’t made. A 76% decrease. In fact the real impact of the decision was almost certainly greater because, as we know from law enforcement agencies around the world, Covid-related lockdowns had led to general increases in illegal activity of this sort during that period.

ECPAT International first broke this story via Politico Pro (paywall). In her excellent blog Acting-ED Dr. Dorothea Czarnecki commented

Every unreported image represents a child potentially in imminent danger of being sexually abused again and often it will be the image of a child who needs to be found and helped now. 

Facebook’s decision was baloney

Let’s not forget that lots of other companies, e.g. Google and Microsoft, concluded there was no need to stop what they were doing, and they were doing exactly the same as Facebook. If Google and Microsoft could get a legal opinion to support them carrying on, so could Facebook. Self-evidently they had no desire so to do.

One is therefore bound to wonder what exactly motivated Facebook to take such an extraordinary step. Any idea of sticking with “industry standards” – a line the company frequently pushes – had obviously been thrown out of the window. Something else was going on. But what? My sense of puzzlement was not diminished when….

The story kept changing

After Facebook stopped scanning for csam, I spoke to various people in the company. They were adamant. As soon as “the situation” was clarified scanning would start again. Arguably “the situation” was cleared up on 29th  April 2021 when a political agreement was reached. An “interim derogation” would be introduced to restore much or all of what had been understood to be the status quo ante.

Facebook was never in any material legal jeopardy but surely once the political horizon was clear they could resume? They could but they didn’t.

The message coming out of Facebook changed.

They said they now had to wait until the new law was not only politically finalised but also available as a final legal text. This meant waiting until the last dots and commas had been inserted and it had been published in the Official Journal of the EU.

The first happened on 14th July,  the second on 30th July (ibid). Yet here we are.

Then the line switched once more

Now I was told it was not, after all, just a matter of seeing the final version of the legal text as published in the Official Journal because

“It’s not as easy as flipping a switch. It’s with engineering.  They are sorting it out. But we will be going again soon.”

A minor digression

In Frances Haugen’s testimony to the US Congress last week we learned that, when the 2020 US Presidential elections were over, Facebook did just “flip a switch” to restore the status quo ante.

This is what Haugen said

“… as soon as the election was over, they turned them [the safety systems] back off or they changed the settings back to what they were before, to prioritise growth over safety. And that really feels like a betrayal of democracy to me.”

In other words Facebook intentionally allowed the craziness to begin again. What does craziness mean on Facebook? It means eyeballs. And what do eyeballs mean? Money.

The line changes once more

Back to the main theme of this blog. When ECPAT went public the journalist on Politico Pro contacted Facebook for a comment. This was when we discovered the company was “consulting” the Irish Data Protection Commission.

So it wasn’t just an engineering question, if it ever was, and it was never only about seeing the legal text or the difficulties of switch flipping.

Given the alacrity and chronological precision with which Facebook stopped scanning, their present tardiness is utterly shameful. More evidence of the company’s detachment from reality, their arrogance. Or is Facebook on manoeuvres?

Rumours, rumours and speculation

The rumour mill is alive with speculation. How can we explain what otherwise seems to be another wholly avoidable, gratuitous self-inflicted wound?

Everyone I know assumes Facebook’s actions must be linked to their plans to introduce end-to-end encryption. If that happens the amount of csam finding its way to NCMEC from Facebook will plummet to zero or very close. The number of children being abused via the platform won’t change. If anything, it is likely to increase because perpetrators will be emboldened to do more.

Nevertheless, what would make the transition to end-to-end encryption so much easier for Facebook is for the Irish Data Protection Commission to find in favour of Patrick Breyer’s complaint, originally lodged on 28th October, 2020 with the data privacy authority in Schleswig-Holstein and later referred to Dublin.

If all or the bulk of Breyer’s petition is upheld, resuming scanning on Facebook might never happen or it could restart in a much reduced form. The decision to introduce end-to-end encryption, at least from a child protection point of view, would be a big non-event and Facebook would be able to say “don’t blame us”.

And if that is the ruling that comes out of Dublin, everyone else will have to stop or change the way they try to protect children. It can’t come to that. Can it?

Posted in Child abuse images, Facebook, Internet governance, Privacy, Regulation, Self-regulation, Uncategorized | Leave a comment

A note about privacy

Several media outlets that major on investigative journalism provide special ways for people to communicate with them.

 “Motherboard” is one such. It is part of an online media group called “Vice”. The name is a little misleading. As far as I can tell it has nothing at all to do with “vice”.

If someone has a story they want Motherboard’s journalists to look at they are invited to send it in. But Motherboard is aware that anonymity (for which also read “privacy” in this instance) may be a prerequisite for a whistleblower or someone with, er, delicate info.

The site therefore gives a would-be informant advice about how to go about concealing themselves.  Or should I say “how to try to conceal themselves”? Read on. Or read this if you really want to do a deeper dive using a different source.

The first thing that strikes you about the advice Motherboard offers is how detailed, extensive and complicated some of it is.

All this actually does is remind us how flimsy or insubstantial the whole online privacy promise is in practice, at least for the overwhelming majority of internet users. Maybe the uber techies didn’t need reminding, but then….

Here are some choice extracts from Motherboard’s suggestions about how to proceed

Postal Mail

“While it may sound old-fashioned, using postal mail is still one of the safest, most anonymous ways to send letters, documents or even thumb drives”. (emphasis added by me).

It goes on

“If you choose to send us postal mail, please do not include a return address and mail your letter in an envelope from a sidewalk mailbox, ideally on a corner you usually don’t go to often. Do not use a post office and don’t take your phone with you”.

Welcome back the Penny Post

So there you are. For all the technological progress we have made, sticking something in a pillar box is still highly commended. Perhaps it’s the best option, as long as you remember to leave your phone on the mantelpiece and, btw, keep an eye out for CCTV. Take care not to leave fingerprints or DNA on the envelope or its contents.

Then Motherboard says this

SecureDrop

“(Our) SecureDrop can only be accessed through the Tor network, which allows you to surf the web without revealing your true location, IP address, or other information that could potentially reveal your identity. Your communications will be encrypted and using this system offers a higher degree of security over regular email” (ditto).

But be warned

Note that even when using SecureDrop, Motherboard nevertheless feels obliged to point out to the unwary

“this system offers (only) a higher degree of security over regular email.”

Key point: Motherboard is not promising or guaranteeing privacy even this way. 

In fact they go on to say

“no system or technology can provide perfect security”.

Mark those words.

“Powerful adversaries”

What else does Motherboard have to say?

“Tor protects your identity from the sites you visit, but a powerful adversary might be able to correlate the timing of the source’s home usage of Tor, plus the timing of the leak” (ditto).

Who are these powerful adversaries? They are state actors and their security services, although several tech companies can match or outfox them as can a host of nerds and sophisticated hackers.

Citizens of North Korea, you have been warned. Put not your faith in Tor.

People’s lives have been destroyed by false promises of privacy

How many cases have we heard of where someone committed suicide, or where people’s lives were ruined, because they believed the sloppy journalism of the early days (which continues still) or the carefully crafted marketing which promised or implied anonymity and privacy but never delivered? Too many.

If everyone understood that the internet was, and in many places remains, as leaky as a household colander, there might be a little less misery in the world.

It really comes down to whether or not a bad actor can be bothered to pursue you. If they do and they have the resources and determination they’ll get you. That’s how I read it.

So when companies bang on about introducing end-to-end encryption to improve privacy they risk creating or perpetuating a dangerous lie. A myth that can cost people their lives or their happiness.

Incremental steps 

Yes, certain steps can make incremental improvements to privacy. End-to-end encryption may be one of them, but this should be the advertising slogan:

“Get a bit more privacy but don’t forget  when you are online you are never really private anyway. You have been warned.”

That’s not going to help stampede people into buying the product.

The truth, the whole truth and nothing but the truth

So be clear: when Facebook and others talk about or imply there is a “trade off” between absolute, guaranteed, unbreakable privacy and protecting children, they are not telling the whole truth and nothing but the truth.  They are doing something else.

Increasingly I believe the move towards end-to-end encryption, at least in part, is about Facebook’s and others’ desire to limit or reduce the costs of moderation and inspection, and to limit or reduce liability risks, either strictly legal ones or PR ones.

Corporate arrogance

Facebook believes it is so big and effective as an advertising platform that, howsoever individuals might feel, businesses just cannot afford not to advertise with it, so the company really couldn’t give a flying fandango about the rest, or if it does it is in a very minor key. Hubris beckons.

In a week which saw a major global outage and a very public whistleblower reveal so many more unsavoury things about how the company behaves, I would start selling stock in Clegg and Zuckerberg. Facebook seems to have become a permanent disaster zone. And it’s hurting kids.

Enough already. Maybe those who are rallying behind Facebook’s plans and attacking Apple’s would like to think again. Or if that is asking too much, could they at least modify their language and abandon Manichean metaphors?

Posted in Child abuse images, Facebook, Privacy, Regulation, Self-regulation | Leave a comment

A PS about scale, proactivity and not listening to the world

I reproduce below an extract from Facebook’s latest “Community Standards Enforcement Report”. It is for the quarter ended June 2021. Very up to date. Published on 18th August, it hardly needs saying there was no appreciable time lag between collecting the data and publishing them. This speaks of highly automated processes. Bravo.

I looked at the list of distinguished academics who, in 2019, scrutinised the methods used by Facebook to generate these reports. The legitimacy of doing things this way is a discussion for another day. Suffice it to say I doubt an arrangement like that will be acceptable in a more regulated future.

What is astonishing though not surprising (is it OK to say that?) about Facebook’s document is the picture it paints, by which I mean the diversity and sheer volume of activity described.

Proactive rate

Moreover the company makes much of its “proactive rate” which, it explains, refers to actions taken by them to remove material “before a user reported it to us”.  

In other words, while Facebook has systems to allow individuals to report matters of concern, the report shows they don’t rely on them to get most of what they target.

Prohibited child related content

The figures on Covid misinformation, hate speech and self-harm are staggering but no less so than those for action taken against content prohibited on child safety grounds.

  • Child nudity and (child) physical abuse content…
    • On Facebook: 2.3 million with a proactive rate over 97%
    • On Instagram: 458,000 with a proactive rate of over 95%
  • Child sexual exploitation content…: 
    • On Facebook: 25.7 million with a proactive rate of over 99% 
    • On Instagram: 1.4 million with a proactive rate of over 96%
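To make concrete what those percentages imply, here is a rough back-of-the-envelope sketch (in Python, purely illustrative) assuming the proactive rate is simply the proportion of actioned items the company found itself before any user report, which is how the figures above are presented.

```python
# Back-of-the-envelope arithmetic on the figures above: roughly how much was
# found by Facebook's own automated systems versus reported by users.
# Assumes proactive_rate = items actioned proactively / total items actioned.

figures = {
    "Facebook, child nudity/physical abuse": (2_300_000, 0.97),
    "Instagram, child nudity/physical abuse": (458_000, 0.95),
    "Facebook, child sexual exploitation": (25_700_000, 0.99),
    "Instagram, child sexual exploitation": (1_400_000, 0.96),
}

for label, (total, rate) in figures.items():
    proactive = total * rate            # found before any user report
    user_reported = total - proactive   # the remainder
    print(f"{label}: ~{proactive:,.0f} proactive, ~{user_reported:,.0f} user-reported")
```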

Bear two things in mind. These numbers refer to a single quarter. They are not a cumulative total for the year so far. And rather obviously they address items or activities which are currently viewable by the company.

Inside Mark Zuckerberg’s head

For more than one reason you have to wonder about who posts material of this kind to spaces which they know or ought to know can be seen either by everyone or at the very least by Facebook itself.

Zuckerberg’s declared intention to introduce encryption to major parts of the company’s services is likely to push the amount of illegal or prohibited activity taking place on his watch up, not down (and not just in respect of child related matters).

That being the case why does Zuckerberg think such a course of action is acceptable? 

Even if one believes people want more privacy, that is no reason to give it to them if you know, as night follows day, more children and others will be hurt if you do. People want all kinds of things that public policy forbids or limits because of their harmful effects.

On the other hand the decision makes sense if you are convinced such a move is essential in order to preserve your business as the dominant brand. That means the decision is about ego or money, probably both. It most definitely is not a decision made in pursuit of improving life on Earth. Zuckerberg personally will not be hurt by it but he is willing to allow others to be.  That is just not right.

Some cynics suggest the move to encryption is all about reducing the costs of, and potential liabilities in relation to, Facebook’s activities as a moderator. Not me. Oh no. I would never suggest such a thing. I’m not a cynic.

Enough already with the ad hominem attacks?

And just in case anybody was minded to accuse me of ad hominem attacks on the person of Mark Zuckerberg, please do not forget he holds a majority of the voting stock in the company. Every key decision Facebook makes is made by him. That’s what personalises it.

There is only one toga that counts at the Court of the Young Emperor.

Facebook can talk until the cows come home about the “preventative strategies” they intend to introduce to accompany the shift to encryption, about  how they intend to identify bad actors before they do bad acts (aren’t they already doing that anyway?), but the inescapable fact is, as things stand, once material goes into an encrypted tunnel it becomes invisible to them.

My hunch

My hunch is that, some time ago, probably in the immediate aftermath of yet another privacy scandal involving the company, Zuckerberg decided the clamour for more privacy was likely to grow and that therefore, to preserve or increase his company’s revenues, Facebook needed to embark on the famous/infamous “pivot to privacy”. As others have observed, after 15 years of prioritizing growth over privacy Facebook decided to switch tracks.

It’s a big gamble. A big bet. But I think it will be shown to be the wrong one, either at large, as the mass of people cease to believe meaningful privacy on the internet is possible anyway, or in particular as it concerns Facebook.

Give a dog a bad name and it’s very hard to shake it off. This dog not only has a bad name when it comes to privacy, it pretty much became a synonym for the lack of it.

Closing your ears

Which brings me to a story in today’s “Sunday Times”. It has a very familiar ring to it.  Seemingly, after “years of apologising” Zuckerberg has decided to stop. Apologising.

Here is what it says in the  article

“The company is now so used to getting bad press that they’re starting to not care….  They don’t like it, but they now think the whole world is against them and they’re retreating to their bunker.”

“The New York Times reported last week …. Zuckerberg has agreed to take a step back from the stream of controversies, leaving human shields such as Sir Nick Clegg and Sheryl Sandberg to take the arrows.”

Bunkers? Let me think about that. 

Shields? That won’t work. It will only emphasise and underline that there is an organ grinder somewhere who is too scared or too arrogant to face his critics.

The company is now more valuable than it has ever been, so if you believe the only thing that matters is the evidence of how popular your products are, then you can see how this “up yours” strategy could emerge. This is another error of judgement and has more than a hint of Clegg about it, a man who is no stranger to major errors of judgement.

The only way to have a chance of getting actual or potential regulators off your back is to stop doing the stuff that attracts their attention. 

Posted in Facebook, Internet governance, Privacy, Regulation, Self-regulation | Leave a comment

Strong support for Apple’s breakthrough initiative

Apple’s announcement about how it intends to address the problem of child sex abuse material (csam) being distributed using its devices and services shows what can happen when a serious company puts serious resources behind trying to solve a serious problem. It also explains why so many children’s advocates from around the world have been rallying in support of the company. This is a rare thing.

A letter applauding Apple’s stance was coordinated by ECPAT International and, in a matter of days,  it attracted almost 100 signatures. Had there been time there would have been many more. But Apple is under attack so we felt we had to move fast.

Stand off in the Valley

It will not have escaped many people’s notice that Facebook had already declared it intended to do more or less the exact opposite of what Apple says it is now planning. And the only, or at any rate the first, major industry attack dog to criticise Apple’s plans was Will Cathcart of WhatsApp, a Facebook-owned company. The irony of the most privacy-abusing company in the history of the internet criticising the most privacy-respecting one, on the grounds of privacy, sort of takes your breath away. Chutzpah thy name is Facebook. They should show a little humility.

But there is no threat to anyone’s privacy

Using a series of technical buffers all that Apple has done is create a means to spot csam. Nothing more. Nothing less. Think about a luggage scanning machine or a sniffer dog at an airport. Nobody’s suitcase or backpack gets opened without probable cause. Is that an abuse of privacy in any meaningful sense? Yet we all tolerate or even welcome it, maybe because we see an immediate benefit to ourselves as a soon-to-be passenger?

Here’s another analogy. Imagine you own a pair of glasses which only allow you to see red dots and you have no means or intention of recording anything anyway.

Billions upon billions of blue, green and yellow dots could pass in front of your eyes but it would be as if they were not there at all. Gone, and forgotten, because they were never recorded to begin with. In this case the red dots are images of children being raped. These you check out before acting to delete them and reporting the apparent perpetrator.

What is wrong with that exactly? Is this a good thing to do or a bad  thing to do? If you believe it is good, do it. If you think it is bad, don’t. There’s no need to go hunting for hypotheticals to provide an alibi for inaction.
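For those who prefer code to coloured dots, the analogy maps onto the way hash-matching systems are generally described: compute a fingerprint of each image and check it against a list of fingerprints of known csam; anything not on the list is never inspected or recorded. The sketch below is deliberately simplified and purely illustrative. Real systems such as PhotoDNA or Apple’s NeuralHash use perceptual hashes that survive resizing and re-compression, plus cryptographic protections, none of which is shown here.

```python
import hashlib

# Deliberately simplified illustration of the "red dots" idea: only images whose
# fingerprint appears on a list of known csam fingerprints are ever flagged;
# everything else passes through uninspected and unrecorded.
# A plain SHA-256 stands in for a perceptual hash purely for illustration.

KNOWN_FINGERPRINTS: set[str] = set()  # in reality, hashes supplied by bodies such as NCMEC or the IWF

def fingerprint(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

def allow_upload(image_bytes: bytes) -> bool:
    """Return True if the image may proceed untouched; flag only known matches."""
    if fingerprint(image_bytes) in KNOWN_FINGERPRINTS:
        # a "red dot": matched a known item, so route it to human review and report
        return False
    # a blue, green or yellow dot: no match, nothing is inspected or stored
    return True
```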

So how did we get here?

Historically, the amount of csam being found on the internet and reported was quite small. This left open the possibility that this was because the amounts “out there” were quite small.

Even as the internet started to grow, the numbers remained remarkably low.

In most countries, countable reports of csam are made to hotlines and, while not every country has one, large swathes of the world’s population are covered. Thus, the numbers reported to and by hotlines have been the best indicator we had.

Look at the 2019 report of INHOPE (the global association of hotlines – I am a member of its Advisory Board). This is what it says:

“…..  with capacity expansion in existing member hotlines…  ( we have seen) double.. the number of CSAM related images and videos processed… from 2017 to 2019.

Statistics from 2019:

  • 183,788 reports were processed
  • 456,055 images and videos were assessed
  • 320,672 illegal images and videos were processed”

To explain these modest but nonetheless welcome numbers we need to look at several factors. Individually, and therefore cumulatively, these had a decisive influence on producing the figures shown.

  • In most jurisdictions it was (and still is) illegal to go proactively looking for csam so, in theory, pretty much every report made to all or most of  the hotlines whose efforts are reflected above was the result of someone “stumbling” across csam accidentally. 
  • Alternatively, perhaps  a would-be reporter  had been sent some csam on an unsolicited basis.  Either way the individual concerned then had to have or find the time, inclination and determination to discover how to make a report and proceed to do so successfully. It’s not hard to work out the weakness and limits of this approach.
  • If there was no mandatory reporting obligation on the firm that owned the server or service on which the csam was  located, any report  an individual might have made perhaps never found its way into the stats.
  • Yet without doubt the single biggest factor which explains the small number of reports reaching INHOPE  is the lack of proactivity. How do we know this? Because in another part of the same forest some hotlines are actually doing things differently, obtaining results which are orders of magnitude greater. See below. But first….

A children’s hagiography of the internet

A future history of the internet will identify a number of key events and players in the fight against csam.

  • Founded in 1996 the IWF  was one of the first hotlines in the world and the key mover in founding INHOPE. 
  • The IWF and BT  get top spot on this list of major global actors because working  co-operatively they devised a proactive method, Cleanfeed, for restricting access to places on the internet known to contain csam. This happened in 2004.  It was an enormously important precedent.
  • Even if the approach skirted the edges of legality the authorities made clear they would not authorise a prosecution because they did not see how it could be in the public interest to take to court anyone or anybody who was helping keep children safe.
  • Cleanfeed probably succeeded because, although emerging as a voluntary measure, it got strong backing from the Government of the day, police, the Crown Prosecution Service and it enjoyed all-Party support in Parliament.
  • The usual suspects were against it. They always are.
  • Fairly soon every UK ISP and every mobile phone company signed up and followed BT’s lead, as did providers of WiFi access in public spaces. They all took the IWF list of urls to be blocked. The practice spread at home and abroad.
  • The IWF also deserves special mention because of their muscular determination to make it possible for people in countries around the world to have a way of reporting csam even when their local population is too small to justify a full blown hotline.
  • Next  in the saintly series must come Microsoft and Professor Hany Farid for developing PhotoDNA. This happened in 2009. Along the same lines as the IWF list, it allowed for the construction of a database, a list, in this case containing unique digital fingerprints of known csam.  PhotoDNA made it easier and quicker to find, delete and report offending items. The numbers began to climb steeply.
  • The usual suspects were against it. They always are.
  • The Canadian Centre for Child Protection  then changed everything. First, they had the ingenious idea of linking the PhotoDNA database with a web crawler. In 2017 they set Project Arachnid  loose and in six weeks identified 5.1 million web pages containing 40,000 unique csam images. This was miles away from anything that had ever been seen in the public domain. We all began to get a better idea of just how big the csam problem really was.  The sense of urgency went up a beat.
  • The usual suspects were against it. They always are.
  • Next the Canadians did something else. In 2018, again for the first time ever, they made it their business to find, speak to, support, organize and help project the voice of victims of online csam. With the Phoenix 11 we heard from (now) adult women who had been sexually abused as children. They all knew pictures or videos of their abuse were still circulating on the internet.
  • Nobody could be left in any doubt about the on-going harm being done to these brave women by the knowledge that pictures of their most awful humiliation and pain remained accessible in cyberspace.
  • Equally nobody could doubt the authenticity of the victims’ anger at parts of Big Tech for failing to act in ways which had now been proven to be highly effective.
  • The way the victims saw it, their rights to privacy and human dignity were reduced to nought in the face of opposition from people who thought so many other things counted for more than they did.
  • Last but by no means least, comes the US-based National Center for Missing and Exploited Children (NCMEC). Its hotline was founded in 1998  and has long been the benign and much respected 800lb gorilla in the global fight to combat csam. While to outsiders NCMEC’s freedom of action has often seemed to have been limited or circumscribed by bizarre and complex US Federal laws, they have stuck to their last and worked directly with companies and law enforcement to show what can be achieved. At scale.
  • It is through NCMEC’s work with willing businesses that last year around 70 million individual items of child sex abuse were reported to them in over 21.7 million individual reports. And btw I’m writing this blog only days after reading of a case in the USA where one person was found in possession of over 8.5 million images (he got 27 years jail time).

With apologies to John Travolta and Olivia Newton-John, “Scale is the word”.

Analogue thinking does not work in a digital world

Apple understands that scale is everything when dealing with csam.

What the usual suspects’ objections to Apple’s policy announcement come down to is this: Apple has devised a method which could be abused by being deployed for purposes other than the detection of csam.  This is so transparently without intellectual coherence it is barely worth taking seriously. Is this the new standard?

“Say ‘no’ to anything that might be used in a bad way?”   Isn’t it a bit late for that?

Is the whole of global Tech on hold in terms of protecting children until Kim Jong-un subscribes to the New York Times and promises not to be naughty any more?

Come in “permissionless innovation” your time is up

I am reminded of the mediaeval Catholic Church’s repeated attempts to stamp out avenues of  scientific and philosophical enquiry because, human frailty being what it was, they suspected it would lead to no good. Look how that worked out.

Rather than banning  research and development because it could be abused, we need  trusted governance. We need  transparency systems linked to sanctions which ensure companies do not allow the dollar signs to dazzle or drug them. 

If a dictator asks you to do something you don’t agree with, walk away. But, please, don’t ask me not to invent good stuff that can protect children against the possibility that you might succumb to the pull of the filthy lucre at some indeterminate point in the future.

If I was a dissident in North Korea or several other places I think pretty much the last thing I would do is go on any bit of the internet to further my plans to overthrow the regime. Samizdat is on its way back.  Tech is bringing us full circle.

The Venerable Alex Stamos speaks

Having earlier urged Facebook to show some humility  when addressing Apple on the matter of privacy, I hesitate to question the judgement of Alex Stamos.  He spoke about Apple’s apparent failure to build a “real trust and safety function” citing the absence of  a  “mechanism to report spam, death threats, hate speech, NCII or any other kinds of abuse”.  Hmmm.

I am sure these are all fine things  for an online business to have, and I would be delighted if Apple introduced them, but they are not a substitute for automated systems which can work at scale to protect children. 

As NCMEC makes clear in the report cited earlier, of the 21.7 million reports it received, 21.4 million (99%)  came from companies and of these it is understood the vast majority were generated by automated systems. Untouched by human hands. Unseen by human eyes (for which we should all be thankful).

In the case of NCMEC itself,  only 1% of  all the reports it received came  from members of the public via a  “mechanism to report spam, death threats, hate speech, NCII or any other kinds of abuse”.

You cannot build a digital universe then propose analogue solutions to the problems it creates or brings in its wake.

 

Posted in Child abuse images, Facebook, Internet governance, Microsoft, Regulation, Self-regulation | Leave a comment

Age verification on the move. Porn is the target

Yesterday’s Daily Telegraph carried a great piece in which Rachel de Souza, England’s new Children’s Commissioner, makes clear her ambitions for age verification in general but in particular in respect of pornography sites. She obviously believes introducing age verification to protect children from porn sites is an urgent priority and worries that the provisions of the Online Safety Bill (OSB) as currently drafted are not strong enough.

de Souza  is right and will find a great deal of support from across a wide range of children’s organizations, women’s organizations and many other civil society bodies.  

More media attention 

This morning I was interviewed on national radio about the Children’s Commissioner’s article, which also seems to have prompted a leader in today’s Times, where they said:

As almost everyone acknowledges, it is beyond time for tougher laws to protect children from harmful, abusive and pornographic social media.

Nevertheless it goes on to note 

…there is considerable disquiet among MPs. This is not because anyone opposes…. protections; it is because there is no compulsion on media companies to enact  (them). 

Hear hear.

The “considerable disquiet” is being expressed, for example, by Damian Collins MP, who is Chair of the Pre-Legislative Scrutiny Joint Committee on the OSB. Collins says:

We need to look at the role robust age-verification can play.

Hear hear again.

Age verification is not a panacea, not a silver bullet, but it is a bullet. It has worked outstandingly well in respect of keeping children away from all the traditional forms of online gambling. There is absolutely no doubt it could do the same elsewhere, including protecting children from porn and other online harms where there is a legally defined or contractually prescribed minimum age.

It is a counsel of despair to say you accept that age verification is a good thing but you won’t agree to its introduction anyway because you are worried about how it might be misused. The challenge is to devise governance or supervision mechanisms which ensure as far as is humanly possible that there is no misuse. We all need to be confident it is only doing what it says on the tin. If we do not think we can create such systems of governance or supervision then the future is indeed grim for everyone but the uber geeks.

The end of internet exceptionalism

What I think is really going on is that, as the internet has become more and more integrated into all aspects of the nation’s political life, social life, family life and children’s lives, we are seeing the end of any notion of internet exceptionalism.

Or rather  more and more of us are no longer willing to accept the wildness of the early years. More and more of us are insisting that the internet and all its works have to be much more closely aligned with our expectations in respect of other types of media and communication tools which now  move among us.

An intensely political period beckons

We all need to be clear. We are entering a period of intense political activity. The  opponents of age verification for porn sites are the usual suspects, the ones we only  normally hear about in children’s debates when they are pointing out why something should not be done, when they are opposing this or that new idea or suggestion. They have a locker full of alibis for inaction. Innovation is cool everywhere but not here. 

Let’s get rid of the most absurd argument right away

Looking ahead to any battles there might be around online porn and children’s access I really do not want to hear anyone say Pornhub or similar can play or has played any kind of useful or positive role in the sexual education or relationship counselling of children.

The fact that some kids may have said they are cool with Pornhub (false bravado aside) or they say they found it  “helpful” in some way, emphatically does not give anyone a licence, much less a mandate, to continue with or to tolerate the status quo. 

To the extent that such sites might (and it’s a mighty, doubting might) have been “helpful” or informative to anyone in the past  it is only a reflection of  historic failings  and lack of  any better alternatives.  No way is it an endorsement or a thank you to Pornhub.

Pornhub was never designed or intended to be an aid to children. It was designed and intended to make money by providing easy access to graphic  forms of sexual imagery for the purpose of promoting sexual stimulation. Education or relationship counselling was not on the list. Children do not have a right to Pornhub. Children have a right to good sex education and states have an obligation to provide it.

Being young is about being a rebel

Children say they are in favour of loads of things their parents or the law forbid them or say are bad for them. It doesn’t make the children right or their parents or the law wrong. 

The doctrine of “evolving capacities” can hit up against any number of brick walls and this is one of them. Do we really want porn sites making individual assessments of whether or not this specific 17 year old or that 15 year old actually would be “cool” and unharmed, even helped, by showing them some or all of their wares? The idea is absurd.

We are drawing to the close of an era

Somewhat prosaically, we are drawing to a close an argument that began at least as far back as the Gambling Act 2005 when, for the first time anywhere in the world, online gambling companies were required to carry out robust age verification. Online age verification is no longer a wild and wacky idea. It has moved to the mainstream. The technology required is trivial. The will to use it on a wider scale has been missing. We are going to fix that.

But at a deeper level, to return to an earlier theme, in the UK and elsewhere in the liberal democracies, we are starting to see the internet first and foremost as a consumer product which, for all its many valuable features, must at its core behave as if it was fit for the consumer space.

Maybe some of the magic or the glitter of the internet will fade. I have a twinge of nostalgia for the excitement of the early days but the world has moved on and the internet cannot stand outside of it, frozen in virtual aspic.

Posted in Age verification, Default settings, Internet governance, Privacy, Regulation, Self-regulation | 1 Comment

My letter in today’s Financial Times

Today the Financial Times has published a letter by me in which I applaud Apple’s decision concerning its plans to limit the possibility of child sex abuse material being distributed via their devices or network. I also suggest it will force Facebook to reconsider its extremely bad intentions.

I am not sure of the etiquette or copyright position vis-a-vis the author in relation to a letter he has penned but below is the text of it anyway. If this blog suddenly disappears and you can’t get hold of me please come and visit me in jail. Bring grapes.

“In his article about Apple’s plans to introduce new child protection policies, Richard Waters suggests the way Apple went about it had “cut short debate” about the potential impact of their planned measures (Opinion, August 10).

Specifically Waters refers to Apple’s plan to inspect content on users’ devices before it is uploaded and placed into a strongly encrypted environment such as iCloud. Apple is going to do this in order to ensure the company is not aiding and abetting the distribution of child sexual abuse material.

Sadly the “debate” has been going for at least five years and for the greater part of that time it has been completely frozen. Things intensified when, in March 2019, Facebook announced it (was) going to do the exact opposite of what Apple is now proposing. That too was a unilateral decision, made all the worse because, unlike with Apple, it was against a well-documented background of Facebook already knowing that its currently unencrypted Messenger and Instagram Direct platforms were being massively exploited for criminal purposes.

In 2020 there were 20,307,216 reports to the US authorities of child sexual abuse material which had been exchanged over either Messenger or Instagram, but Facebook has so far given no sign that it will row back.

The argument is, I’m afraid, a binary one. Once material is strongly encrypted it becomes invisible to law enforcement, the courts and the company itself. So either you are willing to live with that or you are not. Facebook is. Apple isn’t.

However, I suspect Apple’s decision will force Facebook and others to reconsider. There are scalable solutions available which can respect user privacy while at the same time bearing down against at least certain types of criminal behaviour, in this case terrible crimes which harm children.

If people believe Apple or indeed malevolent governments could misuse the technology, that is an important but different point which speaks to how we regulate or supervise the internet. It is emphatically not an argument which allows companies to continue doing nothing to curb illegality where technology exists which allows them to do so. Apple should be applauded. It has not just moved the needle, it has given it an enormous and wholly beneficial shove.”

Posted in Uncategorized | Leave a comment

Bravo Apple

There has been great rejoicing at ECPAT International’s global HQ in Bangkok. Last week the work we have been doing with our partners around strong encryption received an enormous boost when Apple made a hugely important announcement about their plans to keep children safe online. Not everyone likes it but we love it.

The cat is out of the bag

The cat is now very definitely out of the bag. Apple has confirmed a core contention advanced by ECPAT. There are scalable solutions available which do not break encryption, which respect user privacy while at the same time significantly bearing down on certain types of criminal behaviour, in this case terrible crimes which harm children.

If people believe Apple or malevolent Governments could misuse the technology, that is an extremely important point but it is a different one. It speaks to how we regulate or supervise the internet. It is emphatically not an argument which allows companies to continue doing nothing to curb illegality where technology exists which enables them so to do. Equally it is not an argument for Apple to “uninvent” what it has already invented.

What it is is an argument for Governments and legislatures to catch up. Quickly.

In the world of tech, alibis for inaction are always thick on the ground. Apple should be applauded. They have not just tinkered with the needle they have given it an enormous and wholly beneficial shove. The company has not moved fast and broken things. It has taken its time and fixed them.

So what is Apple planning to do?

Apple’s announcement contained three elements. Later this year, in the next version of their operating system, first in the USA then country-by-country they will:

  1. Limit users’ ability to locate child sexual abuse material (csam) and warn about online environments which are unsafe for children.
  2. Introduce new tools to help parents help their children stay safe in relation to online communications, in particular warning about sensitive content which may be about to be sent or has been received.
  3. Enable the detection of csam on individual devices before the image enters an encrypted environment. This will make it impossible for the user to upload csam or distribute it further in any other way.

Number three is what has prompted the greatest outcry.

A game changer

The mere fact a company like Apple has acknowledged they have a responsibility to act in this area, and have come up with a scalable solution, fundamentally changes the nature of the debate. Now we know something can be done, the “it’s not possible” position has been vanquished. Any online business which refuses to change its ways will likely find itself on the wrong side of public opinion and, probably, the law, as legislators around the world will now feel emboldened to act to compel firms to do what Apple has voluntarily chosen to do.

And all the angst?

Several commentators who otherwise appeared to express sympathy for Apple’s stated objectives nevertheless couldn’t quite resist trying to take the shine off the company’s coup de théâtre by complaining about the way they did it.

However, in 2019, Facebook’s unilateral announcement that it intended to do the exact opposite of what Apple is now proposing suggests the possibility of reaching an industry consensus was wholly illusory.

I am sure many “i’s” need to be dotted, many “t’s” need to be crossed, but sometimes I feel when it comes to protecting children everything has to be flawless out of the traps. It is OK for Big Tech to get it wrong everywhere else and fix things later, or not, but that cannot be allowed to happen in this department. It is OK to innovate madly, but not here. We are judged by a different standard.

Don’t get me wrong. I am not in favour of imperfection. I do not applaud innovation or recklessness that pays no heed to the downside.

The simple truth, though, is this whole business has been and is about values and priorities. It is binary. Either you think steps should be taken to minimise risks to children before content is encrypted or you don’t. There is no middle way because when the content is encrypted the content is invisible forever. The bad guys win. Apple has shown how they lose.

Encryption is not broken. No new data is being collected or exploited

In a further statement issued by Apple yesterday they make it abundantly clear and underline that they are not breaking any kind of encryption. They also make it clear their technology is limited in scope and they will not use it for any other purpose.

If you don’t believe that, we are back to the point I made earlier. Let’s discuss it, but whatever the outcome of the discussion might turn out to be, Apple must be allowed and encouraged to carry on. I eagerly await other companies pledging to follow in their footsteps. Soon.

Posted in Apple, Child abuse images, Default settings, Facebook, ICANN, Internet governance, Microsoft, Regulation, Self-regulation | 1 Comment

The importance of Clause 36(3), money and general monitoring

OK. I am going to shout it out loud, or rather I am going to put it in writing, in public, which is sort of the same thing.

There is a great deal in the UK’s Online Safety Bill (OSB) I like. A lot. Stuff we have been campaigning for over many years. However, it is also clear “le diable sera dans le détail” (the devil will be in the detail) or, as in this case, “les codes de pratique et règlements” (the codes of practice and regulations).

If you don’t mind being accused of being, er, a poseur, if you are going to say something utterly banal it probably helps to say it in a foreign language. It suggests this is no ordinary, banal banality.

In other words, on top of what appears on the face of the Bill, the success of the OSB in no small measure is going to be determined by a whole series of codes of practice and regulations which Ofcom and the Secretary of State will draw up. Remember “whole series”. I will return to it. But first:

Clause 36 (3)

Clause 36 (3) of the OSB tells us why, in particular, the codes of practice matter:

“A provider… is to be treated as complying with [the] safety duties for services likely to be accessed by children… if the provider takes the steps described in a code of practice…”

The OSB says similar things in respect of other codes that will be published on reporting, record-keeping and transparency duties, terrorist content, legal but harmful content, and the like. Codes of practice and regulations are going to carry a heavy burden. For now I will focus on children-related dimensions.

Thus, in terms of legal compliance and liability it seems if platforms do what the codes prescribe they will retain the same broad legal immunity which up to now has protected all intermediaries, irrespective of their size. The OSB does not expressly say that but broad immunity is an established part of the background radiation (the eCommerce Directive?) so at least one eminent lawyer believes that to be the case.

I have no quarrel with that. In my view, if a platform meets the terms of the OSB, the codes and regulations, they are entitled to retain broad immunity in relation to items posted by third parties where, prior to notification or discovery, they had no knowledge.

After all, the codes will be detailed and will decisively shape the behaviour of intermediaries. Turning to child sexual abuse material, for example, there is no doubt or ambiguity in relation to precisely what is expected of platforms (see below).

The logic of the codes of practice

And if an intermediary does not follow the codes, regulations or the terms expressly stated in the OSB? What then?

There will be a system of fines and other penalties. These are set out in the OSB or will be in what follows. However, the likely effectiveness of these fines and penalties is being argued about, not least because of doubts about Ofcom’s ability or inclination to mount and sustain an enforcement regime on the scale required.

The risk is obvious. If platforms conclude Ofcom is a paper tiger or is so overstretched they have little to fear any time soon we will have failed.

Platforms must believe there is a serious risk they could be turned over, held accountable, and not in the far distant future.

Ofcom needs an ally. Children need an insurance policy. I have one.

No compliance? Lose the relevant immunity.

Thus, for the avoidance of doubt, somewhere in the OSB it should be made explicit that where a platform governed by a code of practice or other regulations fails to honour the terms, not only could it become subject to the penalties the OSB will usher in, it will also forfeit any and all criminal and civil immunities from which it would otherwise have benefitted.

To be clear: I am not suggesting if platforms fail to honour the terms of a code or regulations they forfeit all immunities in respect of everything they do. That would be unreasonable.

But where a reasonably foreseeable actual harm has resulted, or is alleged to have resulted, from a failure to implement the terms of a code or regulations, then whoever can be said to have been injured as a result should be free to bring an action which would previously have been barred or would have failed because of the immunity. The immunity is therefore lost only insofar as it concerns, and is limited to, the reasonably foreseeable harm suffered by an identifiable individual or group.

Something like this would focus the minds of every Director or senior manager of every platform and would relieve Ofcom of a great deal of the responsibility for ensuring online businesses are routinely following the law rather than just hoping they never get caught or inspected, or that if they are it will be some time hence, when today’s culprits might already have vanished with the loot.

“Whole series”. Big burden

It is apparent we will soon be seeing a raft of draft codes of practice which Ofcom has to prepare. Doubtless there will also be drafts issued by the Secretary of State in relation to his powers and obligations.

No problem. In principle. But…..how will things work in practice?

A vast army of in-house and trade association lawyers and many lawyers in firms hired to supplement them are going to be able to buy their second or third yachts off the back of the work on the consultation and implementation of these codes and related regulations. Some of the preparatory analysis will already have happened and be feeding into Big Tech’s extremely well-funded lobbying strategies.

Money

So how is civil society’s voice going to be heard? I know of no children’s organization in the UK which has the capacity to engage with these processes to anything like the degree that is going to be required or for the period of time entailed.

Every children’s charity is strapped for cash. A great many Charitable Foundations that sometimes step into the breach similarly are having a hard time. Yet if the proposed new regime is to work to best effect and in the way the Government intends, Ofcom or someone other than Big Tech needs to provide some cash.

I am not suggesting we can ever achieve a level playing field as between children’s organizations, the civil service and Big Tech, but something must be done to ensure the tables are not so vertiginously tipped against children’s interests being represented. The processes which lie ahead are going to require a sustained level of detailed engagement.

If there already was an industry levy which Ofcom administered that would be the obvious solution. But there isn’t so maybe as the OSB progresses through Parliament the Government can address this vital question.

General monitoring? No.

One of the supposedly sacrosanct articles of faith of internet governance hitherto has been that intermediaries should be under no obligation to undertake “general monitoring”. It first appeared in the USA courtesy of s230 of the CDA. We copied it in the eCommerce Directive of 2000. It lay at the root of much that later went wrong for children online albeit it took some time for us all to realise it. However, once we did realise it there was no excuse for sticking with it. Yet that is precisely what the EU appears intent on doing.

In the EU’s draft proposal for a new Digital Services Act (DSA) the immunity provisions are repeated eight times e.g. as here on page 13.

“The proposed legislation will preserve the prohibition of general monitoring obligations of the e-Commerce Directive, which in itself is crucial to the required fair balance of fundamental rights in the online world.”

It is then further elaborated and developed in Article 7

“Article 7

No general monitoring or active fact-finding obligations

No general obligation to monitor the information which providers of intermediary services transmit or store, nor actively to seek facts or circumstances indicating illegal activity shall be imposed on those providers.  (emphasis added)

The emphasised words (“nor actively to seek facts or circumstances indicating illegal activity”) are a remarkable thing for any organization to include if it also wants to claim it is concerned with upholding the rule of law. I paraphrase:

“Dudes. Chill. You don’t have to try and find out if any criminals are using your facilities to abuse children. Nah. Spend more time on the beach. Or innovating. Your choice. No pressure.”

The UK is going its own and better way

I am very pleased to say the UK’s OSB does not repeat the archaic and ridiculous formula of the EU’s proposed Article 7.

But make no mistake, neither does the UK impose a “general monitoring” duty. It solves the problem in a different way by imposing quite specific, targeted objectives and requirements.

Here’s an example. Clause 21 (2) of the OSB sets out the duties which all platforms have in respect of illegal content, of which child sexual exploitation and abuse (CSEA) is a priority category. Providers must take

“proportionate steps to mitigate and effectively manage the risks of harm to individuals, as identified in the…… illegal content risk assessment.”

In 21(3), in respect of providers of search facilities, the wording is even more explicit. They have a duty to:

“minimise the risk of individuals encountering priority illegal content.”

Is that an instruction to engage in general monitoring? No it is not. It is an instruction to use available, reliable and privacy-respecting technical tools to detect known illegal content.

What could be wrong with that?

Posted in Child abuse images, Default settings, E-commerce, Internet governance, Regulation, Self-regulation | Leave a comment

Age verification in the EU moving forward

The Advisory Board of euConsent held its first meeting last week.

euConsent aims to deliver a framework of standards which will encourage the development of a pan-European network of providers of online age verification and parental consent mechanisms.

Hugely important – global significance

It is hard to overstate the impact this project could have on the way the internet is used not just in Europe but potentially around the world. Many honest efforts by Regulators to protect children online have found it difficult to solve several key challenges which are rooted in the transnational nature of the medium. One of the most obvious and pressing concerned age verification. The European Commission recognised a pan-European solution is required. They ran a competition to select a team to tackle the problem. euConsent was the result.

Highest levels of data security

With euConsent a solution is in sight. Users will be able to verify their age or give consent for their children to use a site or service without disclosing their identity. All age verification providers who are part of the network will be independently audited to certified levels of assurance. Lawmakers, services, and Regulators can choose how and where the requirements will be applied. All providers will operate to the highest standards of data security.
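To illustrate the kind of flow being described, here is a hypothetical, minimal sketch of a privacy-preserving age assertion: the site learns only that an audited provider vouches the user meets the threshold, never who the user is. Every name, field and the signing scheme here are my own illustrative assumptions, not euConsent’s actual specification (which, among other things, would rely on public-key signatures and certified providers rather than a shared demo key).

```python
import hashlib
import hmac
import json
import time

# Hypothetical sketch only: a minimal "over the age threshold" assertion of the
# kind described above. The relying website sees the claim and a signature from
# an audited age verification provider, and nothing about the user's identity.
# Field names and the HMAC signing scheme are illustrative assumptions.

PROVIDER_KEY = b"demo-signing-key"  # stand-in for the provider's real signing key

def issue_assertion(threshold_met: bool, threshold: int) -> dict:
    """Issued by the age verification provider after doing its own checks."""
    claim = {
        "over_threshold": threshold_met,  # the only thing asserted
        "threshold": threshold,           # e.g. 18
        "issued_at": int(time.time()),
        # deliberately no name, date of birth or document details
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {**claim, "signature": hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()}

def service_accepts(assertion: dict) -> bool:
    """The website verifies the provider's signature and the claim, nothing more."""
    claim = {k: v for k, v in assertion.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(assertion["signature"], expected) and claim["over_threshold"]

# Example: a site asks only whether the user may enter, learning nothing else about them.
token = issue_assertion(threshold_met=True, threshold=18)
print(service_accepts(token))  # True
```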

If it is to be a success such an important project needs vigorous and rigorous scrutiny as it progresses through its different phases.  An Advisory Board has been established and I agreed to be its Chair.  The Board comprises representatives of a wide range of stakeholders: European regulatory authorities, children’s rights organizations, tech companies and politicians.  We held our inaugural meeting last Friday.

The Board will hold the project team accountable, helping them as they establish the standards. The Board’s collective and individual insights will contribute to a system that is workable with existing technology and facilitates the creation and implementation of effective regulations. Any new technologies which may emerge will know what they must be able to do if they are to be recognised as an acceptable tool.

Research evidence

Our first meeting was very encouraging. The initial research phase of euConsent has been conducted by academics from Leiden University, Aston University and the London School of Economics and Political Science, supplemented by further work from the Age Verification Providers’ Association, and the research firm Revealing Reality. These groups presented their key findings to the Advisory Board who were impressed by the scope of what has been done so far. Board member Anna Morgan, Deputy Commissioner at the Irish Data Protection Commission, found the evidence-based foundations of the project really promising. Almudena Lara of Google was pleased the opinions of children themselves are being sought and listened to in the research conducted by Revealing Reality.

Having such a spread of experts all gathered in the same Zoom produced a series of lively interchanges which were immensely valuable! Even at this early stage some key issues were raised. Negotiating the tension between data privacy and child protection lies at the heart of what we are trying to do, and how to cope with the already existing different regulatory approaches across jurisdictions is no less important.

I am looking forward to engaging with the Advisory Board further as euConsent’s technical solutions are developed and released over the coming months.

Posted in Age verification, Default settings, E-commerce, Internet governance, Pornography, Privacy, Regulation, Self-regulation | Leave a comment