Privacy warriors arrive late

Governments and legislators stood by and watched for years while the internet exploded, bringing in its wake huge benefits but also several downsides, particularly for children.

“Permissionless Innovation” was the watchword. We even created special legal immunities to help things along, the idea being that new things would be tried out at the edge of the network without anybody having to sign a form in triplicate, get a green light from “higher up” or worry about a writ or subpoena. This created a reckless culture which is only now beginning to be addressed in every major democracy. In the case of the EU this will be through the Digital Services Act.

Innovation under attack

Against this historical background, pardon me if a wry smile passes my lips when I hear the anti-grooming programmes, classifiers and hash databases being attacked. These are examples of innovation. These are examples of techies trying to find better ways of doing things, in this case keeping children safe. The very opposite of reckless. As Microsoft’s Affidavit attests, these tools are not supersmart tricks designed to make more money for whoever deploys them, although, given the history of Big Tech, the suspicion that they might be is completely understandable.

And who is attacking the innovative child protection tools just mentioned? Not people who are habitués of platforms where children’s rights and safety are discussed. Most of the attackers are substantially identified with completely different agendas, principally the privacy agenda.

Of course everybody is entitled to an opinion, but if some of us who regularly plough the furrow of children’s rights and safety seem confused as to precisely why these privacy warriors are suddenly taking a deep interest in children, I hope they will not take it personally and will understand why.

Is this what the drafters of the GDPR intended?

When passing the GDPR did the European institutions expressly intend to make it difficult to detect and delete images of children being raped? Did they knowingly plan to make it easier for a paedophile to contact a child?

No. The very idea is absurd.

So if there is any legal basis at all for the critics’ arguments about proactive child protection tools, and I do not believe there is, it arises solely as an unanticipated, unintended consequence of a set of rules drafted principally for other purposes.

We need politicians to fix that problem, not manipulate or take advantage of it.

A collective mea culpa

If we had already constructed a transparency and accountability regime in which we all had confidence, I doubt these issues would even be under discussion. But we haven’t. For this we are all to blame, in varying degrees. The answer is to get on with building that regime, not to risk putting children in harm’s way.

I am certain much common ground could be found if we were not immersed in the unwanted, pressured environment created by the current, highly unusual circumstances.

We shouldn’t confuse jurisprudence with politics

As in all things there will be issues of balance and proportionality but in Europe aren’t these, essentially, jurisprudential questions to be determined in accordance with, for example, the European Convention on Human Rights, the EU’s Charter of Fundamental Rights and case law? Should I add the UN Convention on the Rights of the Child and the Lanzarote Convention, to which every EU Member State has signed up? You decide.

Politicians should not take it upon themselves to say “we cannot do this or that because it is illegal or we must do the other because the law requires it” if all that amounts to is using the law as a cover for politics, or as a way of dodging responsibility for something you know could otherwise be unpopular.

The institutions will not allow laws to pass which ex facie are illegal. And if they do, neutral judges will resolve things.

Zero evidence of harm. Tons of evidence of good

Where is the evidence the use of anti-grooming tools, classifiers or hash databases has harmed anyone? There isn’t any.

But we have lots of evidence of the good the tools are doing.

Images

Look at the number of CSAM reports being processed by NCMEC and how many of these resolve to offenders in EU Member States: 3 million in 2019 and 2.3 million up to 1st October 2020. 95% of these were derived from messaging, chat and email services. 200 children in Germany were identified. 70 children in The Netherlands. And there is more of this kind of information available country by country.

Grooming

Look at the concrete evidence showing how anti-grooming tools are protecting children in Europe. And the classifiers work in a similar way.

Between 1st January 2020 and 30th September 2020 NCMEC received 1,020 reports relating to the grooming and online enticement of children for sexual acts where these reports resolved to EU Member States.

905 were the result of reports made by the companies themselves, generated by their own use of tools. Only 105 were the result of manual reports by the public. 361 reports came from chat or messaging apps. 376 came from social media. These led to action to save one or more children in Belgium, France, Germany, Hungary, The Netherlands and Poland. Tell me again why we should junk the tools?

Human review is an integral part of all the processes

There is always human review before any action is taken on something that is flagged by a classifier or an anti-grooming tool. Relying only on keywords is absolutely not what is happening. Context can be vital. But the tools do not comprehend, analyse, record or keep conversations or messages. They pick up on signs which are known to point to perils for kids. No signs. No action. Nothing happens. Just like sniffer dogs at airports.

And by the way, no image goes into a hash database of CSAM without first having been reviewed, normally by at least three sets of human eyes. It does not need to be looked at again after that before it goes to law enforcement or before the image is taken down. That would defeat the whole point of automating this part of the process. Among other things, don’t we want to minimise the number of times individuals look at things like that? Yes we do.
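The workflow described above can be sketched in a few lines. This is a hypothetical illustration of the gating logic only, not any vendor’s actual code: the function names and the three-review threshold are simply taken from the description in this post.

```python
# Hypothetical sketch: a hash enters the database only after enough
# independent human reviewers have confirmed the image; matching afterwards
# is a pure automated lookup, with no human viewing the image again.

REQUIRED_REVIEWS = 3  # "at least three sets of human eyes"

def add_to_hash_db(db: set, image_hash: str, confirmations: int) -> bool:
    """Admit a hash only when enough independent reviewers confirmed it."""
    if confirmations >= REQUIRED_REVIEWS:
        db.add(image_hash)
        return True
    return False

def matches_known_image(db: set, image_hash: str) -> bool:
    """Automated check: a set lookup, nothing more."""
    return image_hash in db

db = set()
add_to_hash_db(db, "a1b2c3", confirmations=3)  # admitted: fully reviewed
add_to_hash_db(db, "d4e5f6", confirmations=1)  # rejected: under-reviewed
```

The point of the sketch is that the human effort happens once, at ingestion; every subsequent match is a lookup that exposes nobody to the image again.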

SIGN THE PETITION

change.org/childsafetyfirst

Posted in Child abuse images, Default settings, Internet governance, Privacy, Regulation, Self-regulation, Uncategorized | 2 Comments

The wisdom of Max Schrems

I met Max Schrems at a seminar in a law school in the USA last year. He opened his remarks by saying that, in preparing his comments for the seminar, he had tried to talk to lawyers in the privacy community who specialised in or knew about children’s rights in the context of privacy law. What he said was “I couldn’t find anyone”, or “there weren’t that many”.

In part what we are seeing in the current debacle in Brussels is a product of that. The privacy community is largely a stranger to the world of online child protection. That must change, and soon.

Here is my brief summary of yesterday’s meeting of LIBE followed by a few observations.

Summary

There is a lot of support for the temporary derogation but, as things stand, it may not be enough to get us over a satisfactory line. We need to keep lobbying.

There are still some worrying misconceptions and misunderstandings kicking around. Unless they are addressed they could sink the tools by making them useless.

Very restrictive

The lead Rapporteur, Birgit Sippel, seems happy to allow tools to continue to be deployed for up to two years providing they only identify material classed as “child pornography” within the meaning of Article 2 of the 2011 Directive.

I believe that would kill off classifiers and the anti-grooming tools. This must be resisted but I think, in part, some people’s doubts are based on a fundamental misconception in relation to how the technologies work (see below).

More problematic is Ms Sippel’s suggestion that nothing is reported to the police unless there has been prior human review. That defeats the whole point of automated proactive systems. The numbers are just too big. That’s precisely why these tools were developed.

What is essential is that there is an exceptionally low error rate. Professor Hany Farid says PhotoDNA works with an error rate of around one in a billion or less.
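To put that figure in perspective, here is a back-of-envelope calculation. The daily volume used below is an illustrative assumption of mine, not a real traffic statistic, and it treats each check as independent, which is a simplification:

```python
# Back-of-envelope: expected false positives at an error rate of one in a
# billion. The daily volume is purely illustrative.
error_rate = 1e-9                        # Farid's cited figure for PhotoDNA
images_checked_per_day = 1_000_000_000   # hypothetical: a billion checks/day
expected_false_positives = error_rate * images_checked_per_day
# Roughly one false positive a day even at a billion checks, which is why
# a human review step at the reporting stage remains manageable.
```

On these (assumed) numbers, even enormous volumes produce a trickle of false alarms, each of which can be caught by the human review the companies already apply.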

I don’t have a problem with Ms Sippel’s ideas around digital impact assessments, consultations or evaluations of the software, on the contrary they sound great, but they cannot be made conditions precedent because that, in effect, means halting everything until goodness knows when.

And the issue about data transferring to the USA could also be another serious obstacle.

Privacy as a barrier to child protection? No.

We want privacy to protect our health and medical records, to stop companies sneakily snooping on us so they can sell us more stuff, to protect our banking transactions and our national infrastructure, to force companies to take stronger measures to prevent hackers getting our personal data and, yes, to stop unwarranted invasions of our private lives and communications by the state and other actors, bad or otherwise.

But look at Facebook’s announcement last week. Children in all parts of the world were benefitting from protections Facebook had implemented to detect threatened suicides and self-harm. Everywhere in the world, that is, except the EU. Done in the name of privacy.

Now it seems, also in the name of privacy, tools could be banned which help keep paedophiles away from our children or which help the victims of child rape regain their human dignity by claiming their right to privacy.

Not understood the technology

At LIBE there were several references to “scanning everybody’s messages”. That is not what is happening with any of the tools we are trying to preserve.

When we used to go to airports, dogs would walk around sniffing lots of people’s luggage searching for drugs and other contraband. The machines airport staff put our luggage through do something similar with x-rays. When we post letters or parcels the Post Office or the carrier employs a range of devices trying to detect illegal items that might be in any of the envelopes or packages they are planning to deliver for us or to us.

Are the airport authorities or the postal services “scanning” everybody’s mail or luggage? No. At least not in any meaningful sense.

The child protection tools we are discussing are like the dogs at the airport, the luggage X-ray machines, or the devices in the Post Office sorting room.

They are looking for solid signs of illegal content or behaviours which threaten children. No sign. No action.

Could the tools be misused?

Could scanning tools be misused for other purposes? Yes they could. How we address that and reassure ourselves it is not happening is important but the tools we have been discussing have been in use, in some cases, for over ten years and we have ample evidence they are doing a good job. We have zero evidence they are doing a bad job.

Who would want to stop them doing that good job just because a variety of bureaucrats didn’t do theirs when they should?  That is what this boils down to.

We have to find a way to allow the tools to carry on while we construct a durable, long-term legal basis and oversight and transparency regime.

Those who claim protecting children in the way these tools can do is “disproportionate”  should recall that proportionality, like beauty, is in the eye of the beholder. And in every legal instrument I know we are told children require special care and attention because they are children.

 


Don’t be a child in Europe

Yesterday the European Data Protection Supervisor (EDPS) published an opinion on the European Commission’s proposal for a temporary suspension of parts of the e-Privacy Directive of 2002. It is a weak Opinion, riddled with error. The good points the EDPS makes are dwarfed and completely overshadowed by the bad.

A rebuke

A major part of the Opinion, in essence, is a rebuke of European Institutions for not doing things in precisely the right order, in exactly the right way at the right time. The Opinion shows an abundance of bureaucratic correctness which entirely misses the human heart of the issues at stake, as well as important parts of the law.

Everywhere else, in every legal instrument I have ever read, including the GDPR, we are told children require special care and attention. Why? Because they are children. The EDPS affords them no such considerations. 

Article 24 of the Charter of Fundamental Rights

The EDPS makes no reference to the explicit language of the EU’s Charter of Fundamental Rights. Nada. Pas un mot. As an aide-memoire I repeat the key words here:

The rights of the child

  1. Children shall have the right to such protection and care as is necessary for their well-being…
  2. In all actions relating to children, whether taken by public authorities or private institutions, the child’s best interests must be a primary consideration.

The EDPS never once even mentions the rights of children. If there is a balance to be struck he shows no signs of knowing how to locate the fulcrum.

A child’s right to privacy? Not mentioned

Search the document high and low. There’s nothing there. No mention of the legal right to privacy of a child who has been raped where pictures of the rape have been distributed for the whole world and her classmates to see. Not one word.

A child’s right to human dignity? Not mentioned

Neither is there any mention of a child’s legal right to human dignity which, in this case, entails getting the images of their humiliation off the internet, away from public view, to the greatest extent possible, as fast as possible. Not one word. 

The EDPS misunderstands the technologies

The technologies being debated do not understand the content of communications. They work in an extremely narrow and specific way.

If I go to a zoo wearing spectacles that only allow me to see zebras, the giraffes, lions and penguins will be invisible to me. They may pass in front of my unseeing eyes, but they might as well not be there. All I see are zebras.

This is how PhotoDNA works.  The EDPS is therefore simply, factually wrong when (page 2 and paras 9 and 52) he suggests there is any

“monitoring and analysis of the content of communications”

PhotoDNA only sees the zebras. In this case the zebras are the already known images of a child being sexually abused. That is to say an image that should not be there in the first place, which nobody has any right to possess never mind publish or distribute.

And the other child protection tools work in similar ways. They do not “analyse” the content of a communication. They cannot say what the picture is about or what a conversation is about. They can only say whether the communication contains known signals of harm or known signals of an intention to harm a child.
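The “zebra spectacles” idea can be sketched as a simple membership test. This is a simplified stand-in of my own: real PhotoDNA uses a robust perceptual hash designed to survive resizing and re-encoding, not SHA-256, and the byte strings here are purely illustrative.

```python
import hashlib

# Illustrative sketch of the "zebra spectacles": the matcher reduces each
# item to a fingerprint and answers only "known" or "not known". It does
# not interpret, record or keep the content itself. (SHA-256 stands in
# here for PhotoDNA's robust perceptual hash.)

KNOWN_HASHES = {hashlib.sha256(b"known-illegal-image-bytes").hexdigest()}

def check(item: bytes) -> bool:
    """Return True only for already-known items; learn nothing otherwise."""
    return hashlib.sha256(item).hexdigest() in KNOWN_HASHES

flagged = check(b"known-illegal-image-bytes")   # the "zebra": a match
ignored = check(b"holiday photo of a giraffe")  # invisible: no match, nothing kept
```

Everything that is not already in the database passes through unseen: the function returns a yes/no answer and retains nothing about the content it examined.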

Do we really want companies to be indifferent and inert?

Does the EDPS want companies wilfully and knowingly to blind themselves to heinous crimes against children? Is he suggesting they should be indifferent to and inert towards what they are facilitating on their platforms?

A resolution of the European Parliament says otherwise

Law enforcement agencies have repeatedly stated it is completely beyond them to address these issues alone. They rely and depend on tech companies doing their bit, a fact recognised by the European Parliament less than a year ago. On 29th November 2019, in a resolution, at para 16 we see the following:

“Acknowledges that law enforcement authorities are confronted with an unprecedented spike in reports of child sexual abuse material (CSAM) online and face enormous challenges when it comes to managing their workload as they focus their efforts on imagery depicting the youngest, most vulnerable victims; stresses the need for more investment, in particular from industry and the private sector, in research and development and new technologies designed to detect CSAM online and expedite takedown and removal procedures;”

How do scanning tools work?

The EDPS makes no reference to other types of scanning taking place on an extremely large scale, such as for cyber security purposes.  At a webinar organized by the Child Rights Intergroup on 15th October Professor Hany Farid made the following observations (at 24.28):

“If you don’t think that PhotoDNA and anti-grooming have a place on technology platforms then I ask you to do the following: turn off your spam filter, turn off your cybersecurity that protects from viruses, malware and ransomware because that is the same technology. And if you believe that we should use a spam filter and if you believe that you should protect your computer from viruses and malware, which I think you do, and if you believe that that technology has a role to protect this computer right here, then why shouldn’t these technologies protect children around the world? At the end of the day it is exactly the same technology, simply tackling a different problem.”

No mention of Microsoft’s Affidavit

On 14th October Microsoft published a sworn Affidavit in which the following words appear at para 8:

“PhotoDNA robust hash-matching was developed for the sole and exclusive purpose of detecting duplicates of known, illegal imagery of child sexual exploitation and abuse, and it is used at Microsoft only for these purposes.”  

At a LIBE Committee meeting it was suggested that companies were scanning content, ostensibly looking for illegal content then processing the data they collect for commercial purposes. Leaving aside the fact that this would be illegal anyway, the Microsoft Affidavit, under acknowledged pain of perjury, expressly states that is not happening.

Microsoft also published the terms of its licence which gives other companies and organizations permission to use PhotoDNA.

The EDPS makes no reference to the Affidavit. If it would help preserve the use of online child protection tools, surely other companies would be willing to swear similar Affidavits? Such Affidavits could remain in force at least until this matter is resolved, and even beyond if necessary.

The EDPS says he is worried about precedents

The EDPS says (para 53):

“The issues at stake are not specific to the fight against child abuse but to any initiative aiming at collaboration of the private sector for law enforcement purposes.” (emphasis added).

Here the EDPS abandons lawyer’s clothes and dons those of a (not very skilful) politician or campaigner.

This is the notorious “slippery slope” argument. It is morally and intellectually bankrupt.  A demagogue’s trick. A sleight of hand.

The unnamed terror

What is the unnamed terror the EDPS is worrying about?  We are not told. Isn’t the position clear? The proposed suspension is entirely and only about the protection of children. Nothing else. Nothing that isn’t written in the document.

It is quite wrong, and legally completely incorrect, to plead a concern for something that is not on the table, not in anyone’s line of sight.

If something comes up in the future, deal with it on its merits. If you agree with it say “yes”. If you don’t, say “no”. Lawyers are meant to be able to distinguish between cases based on the facts.

Punishing children for other people’s mistakes

I have no brief to defend the Commission, much less the history of events leading up to their proposal. But whatever the history, it is completely unacceptable to allow the tools to become illegal on 20th December only because nobody managed to sort this out to the satisfaction of the EDPS before now.

That amounts to intentionally putting children in danger, punishing them for the past failures of others, adults who should have known better and acted differently sooner. Shame, shame.

Don’t be a child in Europe

Next week at the LIBE Committee meeting, if Members of the European Parliament are persuaded by the EDPS report, if it is ultimately reflected in the decision of the upcoming Trialogue and the tools are outlawed, my advice is clear: “don’t be a child in Europe.”

Be a child somewhere else.


Joy tinged with anger

At 5.00 a.m. today the Head of Instagram published a blog entitled “An important step towards better protecting our community in Europe”.

There is much that is important and of interest in Facebook’s blog so please read it but here, for me, are the key sections:

“We use technology to help.. proactively find and remove..suicide and self-harm content…Between April and June this year, over 90% of the suicide and self-harm content we took action on was found by our own technology before anyone reported it to us. But our goal is to get that number as close as we possibly can to 100%.

Until now, we’ve only been able to use this technology to find suicide and self-harm content outside the European Union.”

European children deprived of protection

So children and young people everywhere else in the world have been benefitting from Facebook’s deployment of proactive tools which help stop young people killing or harming themselves. Children in Europe haven’t been. Why? To answer that we have to look to the Irish Data Protection Commissioner (DPC).

Seemingly, having started monitoring this type of content in 2017, Facebook raised the matter with the DPC back in March 2019. The DPC “strongly cautioned Facebook because of both privacy concerns and a lack of engagement with public health authorities in Europe on the initiative.”

Facebook followed the DPC’s advice and consulted with health authorities. Nevertheless the DPC still said “concerns remain regarding the wider use of the tool to profile users.. culminating in human review and potential alerts to emergency services”.

You might want to read that again. It’s hard to believe anyone could be anxious about the possibility that an ambulance or a police officer could go knocking on a door in the expectation of saving a life, and for that to be frowned on or obstructed. Certainly in the UK we are constantly told to contact the emergency services if we have any reason at all to suspect someone is in danger, particularly if that someone is a child.

Just to remind you, in the GDPR and in every legal instrument I know, the position of children is said to require extra care and attention. Yet it is starting to feel that whenever a traditional privacy lawyer writes or drafts something, things end up all wrong. Go figure.

And by the way, there are no issues of principle associated with Facebook sending a message to the police or the ambulance service if someone has made an individual, manual report to them about a person they believe is at risk. It is only if the tools are deployed proactively, at scale, that the DPC gets agitated.

So a malicious or mischievous report gets acted on, while a genuine one can’t be found by a machine. Where’s the logic in that?

Have we taken leave of our collective senses?

Could the tragic death of Molly Russell have been avoided if these tools had existed then? Who can say? But equally I am certain I will not be alone in wondering what kind of world we are creating if, in the name of privacy, we allow these things to happen when we had the possibility of stopping or reducing them.

We have been content to allow the internet to do things that not many years ago would have seemed utterly unbelievable. Saving children’s lives? That’s where we draw a line?

Emotional? Too right it’s emotional

I have heard it said that we shouldn’t be too emotional about these questions. Excuse me. What that is actually saying is that we should detach ourselves from our humanity. It hardly matters to me what impact technology might have on a lump of concrete or other inanimate object but, if you have it within your power to stop the pain, death or suffering of another human being, only a desiccated robot could turn away and say “no”.

The technology that has built huge fortunes for entrepreneurs and pays vast salaries to its employees, who know the colour of your socks, where you go on holiday and what you eat for breakfast, cannot be turned to saving lives? I understand about “balance” and “safeguards” but whenever I hear those words what I am usually hearing is “no” again.

It’s not about privacy. It’s about trust

The mantra of the internet has been about innovation and the wonderful benefits technological advances can produce.

So now technology allows us to detect when a child is contemplating killing themselves. We have technology which allows us to detect when a paedophile is attempting to groom a child. We have technology which can help protect the privacy rights of children who have been raped and further humiliated by having images of their rape broadcast to the world.

Why would we not use them?

Because some people do not trust Big Tech to use these tools lawfully, i.e. without exploiting people’s data in ways the law already forbids.

The real answer, therefore, is to address the lack of trust in Big Tech. And that means addressing transparency. And the fact that our politicians and institutions have so far failed to do this is no reason, now, to make those tools illegal. That is treating a symptom not the disease. We need to get at the disease.

My next blog

I fear my next blog will not be a happy one either. Yesterday we had great news about LIBE agreeing to take the item on 16th November and that remains the case. But other things have happened today. Watch this space. It ain’t over ’til it’s over.


Nuremberg and the internet

Many people who read “East West Street” by Philippe Sands QC may have been surprised to learn that it was the horrors of the Second World War which propelled the international community – as represented by politicians, mainly elected ones – to come together and formulate a set of magnificent documents which would constitute the core of what we now recognise as international “human rights law”.

The Charter of the United Nations was adopted in October 1945. The Universal Declaration of Human Rights in 1948. Many human rights instruments which emerged in the ensuing years can be traced to these two seminal, post-war moments and arguments heard or developed at the Nuremberg Trials.

The UN Convention on the Rights of the Child

Beginning in 1979 the Polish Government initiated the processes which, in 1989, led to the adoption of the United Nations Convention on the Rights of the Child (UNCRC). 

What do all of the above have in common? They predate the internet and the massive availability of digital technologies. In astonishing ways which would have been hard to predict even twenty years ago, never mind in 1948, digital technologies have changed the way we live.

In the case of the UNCRC, the language used is so out of step with the contemporary realities of children’s lives that a “General Comment” has been commissioned to act as an aid to interpretation, specifically in respect of the digital environment. You have until 15th November to make your views known.

The General Comment is not going to change any of the words or principles set out in the UNCRC. There is no need for that. As with the Universal Declaration of Human Rights, the values it enshrines are eternal. Or ought to be. But, as with the UNCRC, so also with the Universal Declaration and similar. We have to start adjusting how we approach matters in a way which is consonant with the digital age. Some of the habits and ways of thinking developed in the analogue era are now obsolete or obsolescent.

There is nothing new under the sun

There has always been crime. There have always been threats to children, the weak, the gullible, the ill-educated or illiterate. Threats to national security and democratic processes are not entirely novel. But the speed, scale, complexity, and the international dimension to the kind of behaviours the internet has facilitated have created enormous difficulties yet to be solved. They will not be solved by people who believe this nonsense:

“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.

We have no elected government, nor are we likely to have one, so I address you with no greater authority than that with which liberty itself always speaks. I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.”

It was singularly apt that this “Declaration of the Independence of Cyberspace” was made in 1996, on or around the tenth anniversary of, and perfectly reflecting, Ronald Reagan’s immortal contribution to political thought: “The nine most terrifying words in the English language are: ‘I’m from the Government, and I’m here to help’.” And where was this utterly up-itself Reaganist utterance made? Davos. Where else?

Governments are a long way from being perfect instruments of, well, almost anything, but they are all that the vast majority of people have or can turn to when faced with overwhelming or complex threats to the commonwealth.

The highly educated, tech-savvy activists will always, or at any rate generally, be able to look after themselves in cyberspace. Governments are for the rest of us. The challenge here is, through the ballot box and our own engagement with political processes, to make those processes better, not give up on them by ceding territory to the geeks. Elections are our shareholder meetings, where nobody has a super veto.

The tide is turning

In every democratic country in the world the tide is turning. In the USA there is EARN IT. Section 230 has been trimmed back and will be trimmed further. In the EU the Digital Services Act heaves into view. In the UK the Online Harms White Paper will soon be upon us. Look at Germany, France, Australia and many other places. Why now?

 

Knowledge of the internet is being democratised 

Historically, too few judges, politicians, policy makers, mainstream journalists and community activists had a good understanding of the internet or the underpinning technology. It emerged so fast. We were awestruck and dazzled. To quote Arthur C Clarke, this stuff really did look like “magic”. We fell for the Silicon Valley schtick.

The techie magicians might have worn jeans and T-shirts, but we now know that was only to hide the suits as the early idealism was smothered by Wall Street.

Knowledge of the internet has been democratised by our experience of it. People are no longer intimidated by the jargon. Democracy trumps technocracy when it comes to social policy and we all now know the social consequences of tech matter. Hugely.

Do I have blind faith in all political institutions and the police and security services which are meant to serve them? Of course not. Only an idiot would think that. Look at Snowden and Echelon.

Quis custodiet ipsos custodes?

This is a question that is almost as old as the hills. All public institutions and Big Tech must be bound by laws and we must develop effective, independent transparency regimes to ensure those laws are being routinely kept not routinely broken. But equally we must not cut off our noses to spite our faces until we reach that happy point.

 

 


I am not going to say “I told you so”

I generally find it extremely irritating when people turn to me and, usually with a smug look, say “I told you so,” so that won’t happen here. With little additional comment I will merely draw your attention to a report which was released in the USA last month.

First point: it was produced by a body called the “Coalition for a Secure and Transparent Internet”. Its mission is to “advocate before U.S. and EU policymakers, ICANN, registrars, registries, and other stakeholders about the importance of open access to WHOIS data.”

Slightly surprised the word “accurate” does not appear between “to” and “WHOIS” but for most sensible people I guess that would be implied.
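The practical effect of GDPR on WHOIS access is easy to see in the records themselves: registrant fields that once carried names and email addresses now typically come back withheld. Here is a minimal sketch of spotting that redaction; the field names, the marker phrases and the sample record are illustrative only, since real registrar output varies widely.

```python
# Illustrative field names and redaction markers; real WHOIS output
# is not standardised and differs from registrar to registrar.
REGISTRANT_FIELDS = ("Registrant Name", "Registrant Organization", "Registrant Email")
MARKERS = ("REDACTED FOR PRIVACY", "DATA PROTECTED", "Please query the RDDS")

def redacted_fields(whois_text: str) -> list[str]:
    """Return the registrant fields whose values appear to be withheld."""
    found = []
    for line in whois_text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key = key.strip()
        if key in REGISTRANT_FIELDS and any(m.lower() in value.lower() for m in MARKERS):
            found.append(key)
    return found

# Invented sample record, in the shape of a typical post-GDPR response.
sample = """\
Domain Name: example-pharmacy.com
Registrant Name: REDACTED FOR PRIVACY
Registrant Organization: REDACTED FOR PRIVACY
Registrant Email: Please query the RDDS service of the sponsoring registrar
"""

print(redacted_fields(sample))
# ['Registrant Name', 'Registrant Organization', 'Registrant Email']
```

All three registrant fields in the sample are flagged, which is exactly the situation the agencies quoted below are complaining about: the record exists, but the identifying data in it is gone.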

Congressman Robert Latta asked several US Federal Agencies for their views on the state of play with WHOIS, referring specifically to the current Covid crisis. This, inevitably, raised broader issues.

In September CSTI published the replies the Congressman had received. Below are a few choice extracts.

From the Food and Drug Administration

“Access to WHOIS information has been a critical aspect of FDA’s mission to protect public health. Implementation of the E.U. General Data Protection Regulation (GDPR) has had a detrimental impact on FDA’s ability to pursue advisory and enforcement actions as well as civil and criminal relief in our efforts to protect consumers and patients.”

From the Federal Trade Commission

“You also highlighted your concerns that the implementation of the European Union’s General Data Protection Regulation (“GDPR”) has negatively affected the ability of law enforcement to identify bad actors online. I share your concerns about the impact of COVID-19 related fraud on consumers, as well as the availability of accurate domain name registration information.”

From Homeland Security

“HSI views WHOIS information, and the accessibility to it, as critical information required to advance HSI criminal investigations, including COVID-19 fraud. Since the implementation of GDPR, HSI has recognized the lack of availability to complete WHOIS data as a significant issue that will continue to grow. If HSI had increased and timely access to registrant data, the agency would have a quicker response to criminal activity incidents and have better success in the investigative process before criminals move their activity to a different domain.”

From the Department of Justice/FBI

“…greater WHOIS access for law enforcement would increase the effectiveness of… investigations by identifying illicit activity in specific areas, and would assist in disrupting and dismantling criminal organizations.”

How did we ever get to this?

That is an excellent question. I’m glad someone asked it.

I agreed about the need for ICANN to be given complete independence from the US Federal Government. But the Obama Administration handed over control without dotting the i’s and crossing the t’s. They left ICANN with the ability to abandon or substantially modify their historic mission, at least in respect of WHOIS.

Once free of potential corrective intervention by the US Federal Government, ICANN became ever more obviously a trade association, a racket.

The public interest always comes second to the financial interests of the Registrars, the Registries and their symbiotic co-dependent, ICANN.

ICANN has weakened WHOIS, not strengthened it. They have reduced the obligations to ensure WHOIS data are accurate and that also means up to date. Link that with other real world developments about how the internet is being managed, and by whom, and anyone with two brain cells can see the future. But that won’t stop the Registrars, Registries and ICANN from dragging things out for as long as possible. Delay for them is the same as money. And money is what it is all about.

Could the US Government reverse its decision and take ICANN back under its wing? Probably not, but if it were shown ICANN acted in bad faith from the get-go, with no serious intention ever to fulfill or keep to the terms of the “Affirmation of Commitments”… then what?

Who was asleep at which wheel?

The EU must take its share of the blame for what happened next, at least insofar as it concerns WHOIS.

In the four years or more between the draft GDPR being published and it being adopted as a final, legal instrument, none of the following words were uttered, never mind discussed, anywhere at any time in Brussels, at least not in any public meetings where minutes were taken and later published. Those words were: ICANN, Registrars, Registries, Registrants and WHOIS.

It’s not that the EU took its eye off the ball. They never had their eye on it. It was only after the event that officials went in to bat to limit the damage once the scale of ICANN’s impudent ambition became apparent. Why was it necessary for them to do that? Because ICANN had adopted an interpretation of GDPR rules which would never have been possible if those rules had been properly drawn up in the first place. And that interpretation is the reason for those comments shown above.

Finally, here is the other nagging question. If EU bureaucrats were not over-familiar with ICANN’s quaint ways and hidden intentions, if they had been lobbied, seduced, hoodwinked or neutralized by the hype, where were the cops and the governments?

A perfect smokescreen

Mainstream media journalists’ eyes glaze over at the first mention of ICANN’s recondite terminology. They shy away when they hear about the glacial pace at which things happen in obscure, acronym-heavy sub-committees. That creates a perfect smokescreen.

Nobody comes out of this covered in glory, other than the Registrars, Registries and their servants, the ICANN bureaucracy. They got exactly what they wanted. Perhaps “glory” is the wrong word here?

A friend of mine who was once utterly immersed in ICANN and similar bodies, e.g. the IGF, reflected how, in the early days, there was a group of high-minded, public-spirited people who flew around the world convinced their personal engagement with this still relatively “new thing”, the internet, and the participatory bodies it was spawning, e.g. ICANN and the IGF, was truly going to reshape that world and make it a better place. “Noblesse oblige”. Then they woke up and realised they’d been had.

Posted in Uncategorized

In Parliament

On Wednesday, in a “Westminster Hall” debate, MPs discussed the seemingly ever-upcoming Online Harms Bill. The fact that this debate happened at all was down to the energetic engagement of Holly Lynch, the Member of Parliament for Halifax, West Yorkshire. Lynch opened and closed the debate with great skill and aplomb. I’d say she’s one to watch for the future. Halifax is lucky to have her.

The debate provided MPs from all political parties an opportunity to voice the concerns of their constituents and discuss the causes they support. As is customary, the Government sent the relevant Minister to listen and respond. MPs from the Labour, Conservative, Scottish National and Democratic Unionist parties spoke. There was a surprising degree of unanimity. But there again, maybe it wasn’t so surprising.

Wails and lamentations

Everyone lamented the delay in publishing the Government’s final response to the consultation on Online Harms. The Minister said a document will be released before the end of this calendar year with a Bill to follow early in the New Year. Nothing new there then. No obvious sense of urgency.

Neither did we hear definitively whether the Bill will be subject to pre-legislative scrutiny by a Committee of both Houses of Parliament. There was a suggestion it might even be 2024 before some or all parts of the legislation become operative. See above.

Bear in mind the Green Paper that started off this whole process was first published in October 2017. Seven years is a long time in the life of a child; it is a whole generation of young children.

Support for age verification remains undimmed. Apparently.

The Government once again reiterated its support for age verification for pornography sites, insisting they want to bring social media within scope. There were references to a major research project the Government is supporting which is designed to produce reliable “age assurance” technologies. This has been mentioned before but perhaps not at such length.

The implication is we may soon see tools being released which allow for the age of children below 18 to be confirmed with a high degree of certainty. This could open up a whole new chapter in online child protection.

The changing and challenging politics of Westminster

What is clear from Wednesday’s debate and from the evolving political landscape in the UK, is Government backbenchers no longer see their frontbenchers, and in particular their Prime Minister, as a safe pair of hands or an infallible demi-god who will always deliver victory on everything, forever. Blame Covid and Brexit.

The sheen of impregnability has gone. Ministers can no longer take it for granted they can get a majority for any old rubbish the (so-called) libertarians in No 10 or scaredy-cats elsewhere in Whitehall might want to throw at them.

The mood of backbench Tory MPs matches well with the mood of MPs across the House. They want measures that will force Big Tech to do a far better job, both generally and in particular when it comes to children’s rights and the protection of children. The only way to achieve that is through laws with teeth. Whatever trust in tech might have been knocking about has been scattered to the winds by their highly visible and repeated failures. Hiring smart lawyers and lobbyists isn’t going to change that. If anything it will only heighten politicians’ determination to “get regulation done”, to coin a phrase.

Danger and opportunity in the air

It is clear that with this mood of tech militancy there is danger in the air. Some might see it as an opportunity. When a Bill finally appears in either House, unless it is up to snuff just about anything could happen. It will be a brave MP, Peer or Minister who stands up and says “steady on, let’s not be too hard on these groovy Californians”. Who will push back or speak up for tech interests? Only themselves and a handful of marginal bodies. Think tanks, research bodies and academics which have been significantly funded by, or have been close to, tech will need to tread with care lest their otherwise sensible insights be drowned out by accusations they have been bought and paid for.

Here is a simple statement of fact. Whatever else it might also be, the internet and its associated technologies, including the devices used to connect to it, are now firmly within the consumer and family market. All parts of the internet value chain have to start acting as if they unreservedly accept that. The Wild West days are well and truly over.

In the context of the internet and tech, children’s and families’ interests can no longer be discussed as if they inevitably pose a threat to free speech or political rights, either in this country or any other. I am all in favour of “striking a balance” in this as in all things, but up to now, as far as I can see, that has meant children’s rights and interests get overlooked or put at the back of the queue. Enough already.

Posted in Uncategorized

Kids can’t pay for the truth

In many countries advertising revenues were vital in helping keep “old-fashioned” newspapers and other types of journals alive, particularly smaller, local ones. Typically these would be in printed form but they all soon had an online counterpart.

In addition there was a vast array of smaller or specialist publications and magazines which, in varying degrees, also depended on advertising revenues.

The people employed to write for or edit the above had, by and large, learnt the trade of journalism. The importance of checking facts was dinned into them and they were bound by a code of professional ethics, reinforced by laws about liabilities.

Of course there were failures, sometimes spectacular ones, and there were always issues around how to select, interpret and present “facts”. Typically, any bias correlated either with the individual author’s views or the owner’s interests. Minority opinions would often struggle to get an airing or a fair hearing.

What was NOT easy to find

Yet for all of its many and obvious failings, under the muddled ancien régime barefaced lies and straightforwardly insane or calculatedly manipulative explanations of world events were NOT that easy to find, certainly not on any large scale, or via any easily accessible, readily available outlets. Self-correcting mechanisms were in place. You had to hunt for the dark side, and that alone tended to keep the numbers and the level of interest down.

But look where we are now. Platforms which have starved journalism of an important part of its lifeblood, advertising revenues, have now become major promoters, conduits, providers, call it what you will, of the exact opposite of what good journalism is about. And societies all over the world are hurting because of it. In several ways.

If the internet was just a large seminar room

If the internet was just a large university seminar room, none of this would matter, or at least not very much.

But the internet is not a seminar room. Misinformation spread to serve a specific project has huge real-world effects, and rarely are these pretty. On the contrary, they pose a direct threat to liberal values and democratic institutions. Global warming deniers and anti-vaxxers threaten human life itself.

Nobody should refuse to take sides

Nobody should refuse to take sides in this debate, particularly if our children risk being gulled into becoming pawns or spear carriers for incendiary, hate-filled rabble rousers carried along by destructive ignorance.

Specifically, tech companies’ pervasiveness in the modern world means they cannot claim to be innocent ingénues, bystanders with minimal or no interest in the outcome.

Myopic Utopianism is not the answer

Saying the answer to bad speech is more speech is the kind of myopic Utopianism that was partly responsible for getting us into this mess in the first place. The answer to bad speech is don’t give it a megaphone. Apologising afterwards just won’t do.

It’s easy to state the problem. Not so easy to come up with solutions if your company’s income depends not upon the truth, or any recognisable version of it, but on something other than truth.

The Mel Gibson School of Philosophy

Silicon Valley pulled off a remarkable trick when they managed to convince so many of us that the absence of regulation was a synonym for “freedom”, and therefore any attempt to regulate them was an attack on “freedom”. I think Mel Gibson must have been their philosophical reference point. “Freedom” in this case was really a synonym for the ability to make money. In that respect they succeeded brilliantly.

Get “digitally literate”. Really? 

We are now being told to chill. Digital literacy is the answer.

Who could be against digital literacy? I’m not. It should be encouraged to the greatest extent possible. But it sort of drags us back to the idea that the internet is a seminar room. If we are all just well educated enough, virtue will triumph and evil will fail. Er, no.

The digital literacy schtick shifts the responsibility back to us to get ourselves up to speed so as to negate or nullify the very things the platforms are doing.

For adults there is a stronger case for this. But for children?

Or pay for quality journalism

Alternatively we are told to chill for a different reason.

Good journalism is not dead. You just have to pay. Where does that leave kids and the poor? Some of the subscriptions are substantial. I know. I have several.

Countries which have public service broadcasters not dependent on advertising revenues, e.g. the BBC in the UK, are very fortunate, but money is tight and they are under constant attack from commercial interests who would like to see them dead and buried, or at any rate reduced in size and reach.

Public service and other broadcasters and publishers are having to compete against a variety of platforms not bound by their code of ethics. These platforms are not even bound by the same laws. They enjoy massive immunities.

And worse, they think nothing of cannibalizing other people’s output, providing it for “free” while they, not the originator, pull in even more advertising dollars off the back of it, in turn making it harder… you get the picture. This is one of the reasons why the authorities in Australia are trying to find a way to get the big platforms to pay.

Misinformation/disinformation/fake news is a child protection concern

The Online Harms legislation will begin its passage through the UK Parliament soon (we hope). The EU’s Digital Services Act is beginning its journey through the EU institutions. This question of misinformation/disinformation is clearly going to be important to several interests. Children’s organizations will be making the case that it is very much a child protection concern as well.

Posted in Default settings, E-commerce, Privacy, Regulation, Self-regulation

Let’s not make TWO mistakes

Nobody had spotted it. The European Commission openly acknowledged an error had been made. If left uncorrected it would bring to an end measures which have been protecting kids since 2009.

On 10th September the Commission published a proposal.  It describes the problem and, pending the development of a permanent or longer term solution, suggests the status quo is preserved at least until 2025. Phew! That’ll do. Disaster averted.

What was the mistake?

If the mistake is not rectified, when the European Electronic Communications Code comes into effect on 20th December this year, it will become illegal for a range of online businesses operating within the EU to continue or begin using automated proactive tools to try to detect child grooming behaviour, use PhotoDNA or similar to identify hashes of known child sex abuse still images or videos, or use classifiers to spot images likely to contain child sex abuse material so they can be sent for human review.

How well have these sorts of tools been working up to now? Absolute proof is impossible, but to get some insight just look at the last annual report of the USA’s hotline, NCMEC.

According to NCMEC, in 2019 16.9 million child sex abuse images were reported to them and deleted. 99% were discovered through the kind of tools that are now under threat.
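The hash-database element of these tools is conceptually simple, and worth seeing in miniature: compute a fingerprint of an uploaded file and check it against a list of fingerprints of previously identified material, routing matches to human review. The sketch below is a deliberate simplification: real systems use PhotoDNA-style perceptual hashes that survive resizing and re-encoding, whereas the exact cryptographic hash used here only catches byte-identical copies, and the “known” hash list is invented for the demo.

```python
import hashlib

def file_hash(data: bytes) -> str:
    """Exact fingerprint of a file's bytes (stand-in for a perceptual hash)."""
    return hashlib.sha256(data).hexdigest()

def needs_human_review(upload: bytes, known_hashes: set[str]) -> bool:
    """Flag an upload whose fingerprint matches the known-image database."""
    return file_hash(upload) in known_hashes

# Invented single-entry "database" of previously identified material.
known = {file_hash(b"previously-identified-content")}

print(needs_human_review(b"previously-identified-content", known))  # True
print(needs_human_review(b"benign holiday photo", known))           # False
```

Note the asymmetry that makes this approach attractive to its designers: nothing about the upload is retained or analysed beyond the fingerprint comparison itself, which is part of the Commission’s argument that such tools are narrowly targeted rather than general surveillance.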

Heavy weather

At the LIBE Committee earlier this week the Commission’s remedial proposal ran into some heavy weather (the relevant part of the video starts at 10.36).

Yet several of the points some of the MEPs made at the Committee meeting were perfectly reasonable. It is earnestly to be hoped Commission officials and MEPs can work something out that will also meet with the approval of the Council of Ministers.

A failure to put this right, now we can see it full on, would not be an oversight or a second mistake. It would be something far, far worse.

A ban on innovation to protect children?

If I have any criticism of the Commission’s proposal, it is that it seems to suggest only child protection technologies currently in use and well established will be covered and therefore allowed. Presumably somebody in Brussels is drawing up a list?

This looks perilously close to saying innovating to protect children is being made illegal. One imagines updates and fixes will be allowed so this could easily get extremely messy.

Would it not be better and simpler to describe technology-neutral general principles governing the use of proactive, online child protection tools? Provided any new tools that might come along conform to those principles, they would be in the clear.

Lack of trust and transparency

Obviously there is not a single MEP who wants to help sexual predators to groom children. Neither is any MEP unconcerned about the circulation of child sex abuse images.

Thus, the discontent expressed at the LIBE Committee meeting was principally an echo of the lack of trust in tech companies. This is something the European institutions could and should have addressed before now, but the fact that they have not done so should not mean children have to pay the price.

One MEP mentioned the possibility that companies currently proactively looking for illegal images or grooming behaviour might be deliberately acquiring data to use for commercial purposes.

The fact is the Commission’s proposal expressly states such behaviour would be illegal, as it would also be under the GDPR, so once again we are back to the lack of trust which in turn is rooted in zero transparency.

This is one of the key aspects of the reforms to internet regulation to be addressed in the planned Digital Services Act, and explains why the Commission describes the 10th September proposal as being only for the interim.

Posted in Default settings, E-commerce, Facebook, Google, Regulation, Self-regulation

Children’s groups speak out

The EU held a consultation on the upcoming Digital Services Act. It closed yesterday. Here is a link to the document I submitted with the support of one or more children’s groups from 15 Member States. What with the holiday period, Covid and the relatively short turnaround time, that’s not a bad showing. The processes that will now follow will likely carry on for some time and in the months (years?) ahead I hope we can build on that level of  engagement. It is vital that we do.

The year 2000 is ancient history

The decision-makers in Brussels-Strasbourg must understand that, as compared with 2000 when they adopted the first set of ground rules for the internet in the form of the e-Commerce Directive, the internet has changed almost beyond recognition. Now one in five of all internet users in the EU is a child.

Children and families are therefore a major and persistent presence. They can no longer be treated as an irritating, trivial concern in a larger and more important or nobler struggle against, well, against all manner of societal and political evils. Children need to move from afterthought to always-thought in cyber policy making.

The five key recommendations

If you look at the document you will see it directs policy-makers’ attention to five major suggestions:

  1. Establish a duty of care
  2. Create a meaningful, independent transparency regime
  3. Revisit the GDPR through the lens of children
  4. Closely scrutinise the operation of the AVMSD
  5. Improve the co-ordination and management of policy-making processes affecting kids

There is a separate paper, which was not submitted as part of the formal response. It acknowledges that the EU has been a major world leader in online child protection but it also details where it has not always got it right. I call it the “Consequences” document.

Posted in Age verification, Child abuse images, Consent, Default settings, E-commerce, Internet governance, Pornography, Privacy, Regulation, Self-regulation