Don’t be a child in Europe

Yesterday the European Data Protection Supervisor (EDPS) published an opinion on the European Commission’s proposal for a temporary suspension of parts of the e-Privacy Directive of 2002. It is a weak Opinion, riddled with error. The good points the EDPS makes are completely overshadowed by the bad.

A rebuke

A major part of the Opinion, in essence, is a rebuke of European Institutions for not doing things in precisely the right order, in exactly the right way at the right time. The Opinion shows an abundance of bureaucratic correctness which entirely misses the human heart of the issues at stake, as well as important parts of the law.

Everywhere else, in every legal instrument I have ever read, including the GDPR, we are told children require special care and attention. Why? Because they are children. The EDPS affords them no such consideration.

Article 24 of the Charter of Fundamental Rights

The EDPS makes no reference to the explicit language of the EU’s Charter of Fundamental Rights. Nothing. Not a word. As an aide-memoire I repeat the key words here:

The rights of the child

  1. Children shall have the right to such protection and care as is necessary for their well-being…
  2. In all actions relating to children, whether taken by public authorities or private institutions, the child’s best interests must be a primary consideration.

The EDPS never once even mentions the rights of children. If there is a balance to be struck he shows no signs of knowing how to locate the fulcrum.

A child’s right to privacy? Not mentioned

Search the document high and low. There’s nothing there. No mention of the legal right to privacy of a child who has been raped where pictures of the rape have been distributed for the whole world and her classmates to see. Not one word.

A child’s right to human dignity? Not mentioned

Neither is there any mention of a child’s legal right to human dignity which, in this case, entails getting the images of their humiliation off the internet, away from public view, to the greatest extent possible, as fast as possible. Not one word. 

The EDPS misunderstands the technologies

The technologies being debated do not understand the content of communications. They work in an extremely narrow and specific way.

If I go to a zoo wearing spectacles that only allow me to see zebras, the giraffes, lions and penguins will be invisible to me. They may pass in front of my unseeing eyes, but they might as well not be there. All I see are zebras.

This is how PhotoDNA works. The EDPS is therefore simply, factually wrong when he suggests (page 2 and paras 9 and 52) there is any

“monitoring and analysis of the content of communications”

PhotoDNA only sees the zebras. In this case the zebras are the already known images of a child being sexually abused. That is to say, an image that should not be there in the first place, which nobody has any right to possess, never mind publish or distribute.

And the other child protection tools work in similar ways. They do not “analyse” the content of a communication. They cannot say what the picture is about or what a conversation is about. They can only say whether the communication contains known signals of harm or known signals of an intention to harm a child.
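The zebra analogy can be put in concrete terms with a toy sketch. The code below is not PhotoDNA: it is a minimal illustration assuming an exact cryptographic hash (SHA-256), whereas the real system uses a robust perceptual hash so that resized or slightly altered copies still match. But the principle it demonstrates is the same: compare a compact signature against a list of known signatures, and see nothing else.

```python
import hashlib

# Purely hypothetical "database" of signatures of already-known illegal
# images. SHA-256 stands in for PhotoDNA's robust perceptual hash for
# simplicity; the matching logic is the point, not the hash function.
KNOWN_HASHES = {
    hashlib.sha256(b"known-illegal-image-bytes").hexdigest(),
}

def hash_image(image_bytes: bytes) -> str:
    """Reduce an image to a fixed-length signature (the 'zebra spectacles').
    The tool never interprets the picture; it only computes this number."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_abuse_image(image_bytes: bytes) -> bool:
    """True only if the signature matches a known entry; everything else,
    the giraffes, lions and penguins, is simply invisible."""
    return hash_image(image_bytes) in KNOWN_HASHES

print(is_known_abuse_image(b"known-illegal-image-bytes"))  # → True
print(is_known_abuse_image(b"holiday photo"))              # → False
```

Nothing about an unmatched image is understood, recorded or "analysed"; it produces no match and passes through untouched.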

Do we really want companies to be indifferent and inert?

Does the EDPS want companies wilfully and knowingly to blind themselves to heinous crimes against children? Is he suggesting they should be indifferent to and inert towards what they are facilitating on their platforms?

A resolution of the European Parliament says otherwise

Law enforcement agencies have repeatedly stated it is completely beyond them to address these issues alone. They rely on tech companies doing their bit, a fact recognised by the European Parliament less than a year ago. In a resolution of 29th November 2019, at para 16, we see the following:

“Acknowledges that law enforcement authorities are confronted with an unprecedented spike in reports of child sexual abuse material (CSAM) online and face enormous challenges when it comes to managing their workload as they focus their efforts on imagery depicting the youngest, most vulnerable victims; stresses the need for more investment, in particular from industry and the private sector, in research and development and new technologies designed to detect CSAM online and expedite takedown and removal procedures;”

How do scanning tools work?

The EDPS makes no reference to other types of scanning taking place on an extremely large scale, such as for cyber security purposes. At a webinar organized by the Child Rights Intergroup on 15th October, Professor Hany Farid made the following observations (at 24.28):

“If you don’t think that PhotoDNA and anti-grooming have a place on technology platforms then I ask you to do the following: turn off your spam filter, turn off your cybersecurity that protects from viruses, malware and ransomware because that is the same technology. And if you believe that we should use a spam filter and if you believe that you should protect your computer from viruses and malware, which I think you do, and if you believe that that technology has a role to protect this computer right here, then why shouldn’t these technologies protect children around the world? At the end of the day it is exactly the same technology, simply tackling a different problem.”

No mention of Microsoft’s Affidavit

On 14th October Microsoft published a sworn Affidavit in which the following words appear at para 8:

“PhotoDNA robust hash-matching was developed for the sole and exclusive purpose of detecting duplicates of known, illegal imagery of child sexual exploitation and abuse, and it is used at Microsoft only for these purposes.”  

At a LIBE Committee meeting it was suggested that companies were scanning content, ostensibly looking for illegal content, then processing the data they collect for commercial purposes. Leaving aside the fact that this would be illegal anyway, the Microsoft Affidavit, under acknowledged pain of perjury, expressly states that is not happening.

Microsoft also published the terms of its licence which gives other companies and organizations permission to use PhotoDNA.

The EDPS makes no reference to the Affidavit. If it would help preserve the use of online child protection tools, surely other companies would be willing to swear similar Affidavits? Such Affidavits could remain in force at least until this matter is resolved, and even beyond if necessary.

The EDPS says he is worried about precedents

The EDPS says (para 53):

“The issues at stake are not specific to the fight against child abuse but to any initiative aiming at collaboration of the private sector for law enforcement purposes.” (emphasis added).

Here the EDPS abandons lawyer’s clothes and dons those of a (not very skilful) politician or campaigner.

This is the notorious “slippery slope” argument. It is morally and intellectually bankrupt.  A demagogue’s trick. A sleight of hand.

The unnamed terror

What is the unnamed terror the EDPS is worrying about?  We are not told. Isn’t the position clear? The proposed suspension is entirely and only about the protection of children. Nothing else. Nothing that isn’t written in the document.

It is quite wrong, and legally completely incorrect, to plead a concern for something that is not on the table, not in anyone’s line of sight.

If something comes up in the future, deal with it on its merits. If you agree with it say “yes”. If you don’t, say “no”. Lawyers are meant to be able to distinguish between cases based on the facts.

Punishing children for other people’s mistakes

I have no brief to defend the Commission, much less the history of events leading up to their proposal. But whatever the history, it is completely unacceptable to allow the tools to become illegal on 20th December only because nobody managed to sort this out to the satisfaction of the EDPS before now.

That amounts to intentionally putting children in danger, punishing them for the past failures of others, adults who should have known better and acted differently sooner. Shame, shame.

Don’t be a child in Europe

Next week at the LIBE Committee meeting, if Members of the European Parliament are persuaded by the EDPS report, if it is ultimately reflected in the decision of the upcoming Trialogue and the tools are outlawed, my advice is clear: “don’t be a child in Europe.”

Be a child somewhere else.


Joy tinged with anger

At 5.00 a.m. today the Head of Instagram published a blog entitled “An important step towards better protecting our community in Europe”.

There is much that is important and of interest in Facebook’s blog so please read it but here, for me, are the key sections:

“We use technology to help.. proactively find and remove..suicide and self-harm content…Between April and June this year, over 90% of the suicide and self-harm content we took action on was found by our own technology before anyone reported it to us. But our goal is to get that number as close as we possibly can to 100%.

Until now, we’ve only been able to use this technology to find suicide and self-harm content outside the European Union.”

European children deprived of protection

So children and young people everywhere else in the world have been benefitting from Facebook’s deployment of proactive tools which help stop young people killing or harming themselves. Children in Europe haven’t been. Why?  To answer that we have to look to the Irish Data Protection Commissioner (DPC).

Seemingly, having started monitoring this type of content in 2017, Facebook raised the matter with the DPC back in March 2019. The DPC “strongly cautioned Facebook because of both privacy concerns and a lack of engagement with public health authorities in Europe on the initiative.”

Facebook followed the DPC’s advice and consulted with health authorities. Nevertheless the DPC still said “concerns remain regarding the wider use of the tool to profile users.. culminating in human review and potential alerts to emergency services”.

You might want to read that again. It’s hard to believe anyone could be anxious about the possibility an ambulance or a police officer could go knocking on a door in the expectation of saving a life, and for that to be frowned on or obstructed. Certainly in the UK we are constantly told to contact the emergency services if we have any reason at all to suspect someone is in danger, particularly if that someone is a child.

Just to remind you, in the GDPR and in every legal instrument I know, the position of children is said to require extra care and attention. Yet it is starting to feel that whenever a traditional privacy lawyer writes or drafts something, things end up all wrong. Go figure.

And by the way there are no issues of principle associated with Facebook sending a message to the police or the ambulance service if someone has made an individual, manual report to them about a person they believe is at risk. It is only if the tools are deployed proactively, at scale, that the DPC gets agitated.

So a malicious or mischievous report gets acted on, while a genuine one can’t be found by a machine. Where’s the logic in that?

Have we taken leave of our collective senses?

Could the tragic death of Molly Russell have been avoided if these tools had existed then? Who can say? But equally I am certain I will not be alone in wondering what kind of world we are creating if, in the name of privacy, we allow these things to happen when we had the possibility of stopping or reducing them.

We have been content to allow the internet to do things that not many years ago would have seemed utterly unbelievable. Saving children’s lives? That’s where we draw a line?

Emotional? Too right it’s emotional

I have heard it said that we shouldn’t be too emotional about these questions. Excuse me. What that is actually saying is we should detach ourselves from our humanity. It hardly matters to me what impact technology might have on a lump of concrete or other inanimate object but, if you have it within your power to stop the pain, death or suffering of another human being, only a desiccated robot could turn away and say “no”.

The technology that has built huge fortunes for entrepreneurs, and pays vast salaries to its employees who know the colour of your socks, where you go on holiday and what you eat for breakfast, cannot be turned to saving lives? I understand about “balance” and “safeguards” but whenever I hear those words what I am usually hearing is “no” again.

It’s not about privacy. It’s about trust

The mantra of the internet has been about innovation and the wonderful benefits technological advances can produce.

So now technology allows us to detect when a child is contemplating killing themselves.  We have technology which allows us to detect when a paedophile is attempting to groom a child. We have technology which can help protect the privacy rights of children who have been raped and further humiliated by having images of their rape broadcast to the world.

Why would we not use them?

Because some people do not trust Big Tech to use these tools lawfully, i.e. in ways which do not exploit people’s data in a manner the law already forbids.

The real answer, therefore, is to address the lack of trust in Big Tech. And that means addressing transparency. And the fact that our politicians and institutions have so far failed to do this is no reason, now, to make those tools illegal. That is treating a symptom not the disease. We need to get at the disease.

My next blog

I fear my next blog will not be a happy one either. Yesterday we had great news about LIBE agreeing to take the item on 16th November and that remains the case. But other things have happened today. Watch this space. It ain’t over ’til it’s over.


Nuremberg and the internet

Many people who read “East West Street” by Philippe Sands QC may have been surprised to learn it was the horrors of the Second World War which propelled the international community – as represented by politicians, mainly elected ones – to come together and formulate a set of magnificent documents which would constitute the core of what we now recognise as international “human rights law”.

The Charter of the United Nations came into force in October 1945. The Universal Declaration of Human Rights was adopted in 1948. Many human rights instruments which emerged in the ensuing years can be traced to these two seminal, post-war moments and arguments heard or developed at the Nuremberg Trials.

The UN Convention on the Rights of the Child

Beginning in 1979 the Polish Government initiated the processes which, in 1989, led to the adoption of the United Nations Convention on the Rights of the Child (UNCRC). 

What do all of the above have in common? They predate the internet and the massive availability of digital technologies.  In astonishing ways which would have been hard to predict even twenty years ago, never mind in 1948, digital technologies have changed the way we live.

In the case of the UNCRC, the language used is so out of step with the contemporary realities of children’s lives that a “General Comment” has been commissioned to act as an aid to interpretation, specifically in respect of the digital environment. You have until 15th November to make your views known.

The General Comment is not going to change any of the words or principles set out in the UNCRC. There is no need for that. As with the Universal Declaration of Human Rights, the values it enshrines are eternal. Or ought to be. But, as with the UNCRC, so also with the Universal Declaration and similar. We have to start adjusting how we approach matters in a way which is consonant with the digital age. Some of the habits and ways of thinking developed in the analogue era are now obsolete or obsolescent.

There is nothing new under the sun

There has always been crime. There have always been threats to children, the weak, the gullible, the ill-educated or illiterate. Threats to national security and democratic processes are not entirely novel. But the speed, scale, complexity, and the international dimension to the kind of behaviours the internet has facilitated have created enormous difficulties yet to be solved. They will not be solved by people who believe this nonsense:

“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.

We have no elected government, nor are we likely to have one, so I address you with no greater authority than that with which liberty itself always speaks. I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.”

It was singularly apt that this “Declaration of the Independence of Cyberspace” was made, in 1996, on or around the tenth anniversary of, and perfectly reflecting, Ronald Reagan’s immortal contribution to political thought: “The nine most terrifying words in the English language are: ‘I’m from the Government, and I’m here to help’.” And where was this utterly up-itself Reaganist utterance made? Davos. Where else?

Governments are a long way from being perfect instruments of, well, almost anything, but they are all that the vast majority of people have or can turn to when faced with overwhelming or complex threats to the commonwealth.

The highly educated, tech-savvy activists will always, or at any rate generally, be able to look after themselves in cyberspace. Governments are for the rest of us. The challenge here is, through the ballot box and our own engagement with political processes, to make those processes better, not to give up on them by ceding territory to the geeks. Elections are our shareholder meetings, where nobody has a super veto.

The tide is turning

In every democratic country in the world the tide is turning. In the USA there is EARN IT. Section 230 has been trimmed back and will be trimmed further. In the EU the Digital Services Act heaves into view. In the UK the Online Harms Bill will soon be upon us. Look at Germany, France, Australia and many other places. Why now?


Knowledge of the internet is being democratised 

Historically, too few judges, politicians, policy makers, mainstream journalists and community activists had a good understanding of the internet or the underpinning technology. It emerged so fast. We were awestruck and dazzled. To quote Arthur C Clarke, this stuff really did look like “magic”. We fell for the Silicon Valley schtick.

The techie magicians might have worn jeans and T-shirts, but we now know that was only to hide the suits as the early idealism was smothered by Wall Street.

Knowledge of the internet has been democratised by our experience of it. People are no longer intimidated by the jargon. Democracy trumps technocracy when it comes to social policy and we all now know the social consequences of tech matter. Hugely.

Do I have blind faith in all political institutions and the police and security services which are meant to serve them? Of course not. Only an idiot would think that. Look at Snowden and Echelon.

Quis custodiet ipsos custodes?

This is a question that is almost as old as the hills. All public institutions and Big Tech must be bound by laws, and we must develop effective, independent transparency regimes to ensure those laws are being routinely kept, not routinely broken. But equally we must not cut off our noses to spite our faces until we reach that happy point.




I am not going to say “I told you so”

I generally find it extremely irritating when people turn to me and, usually with a smug look, say “I told you so,” so that won’t happen here. With little additional comment I will merely draw your attention to a report which was released in the USA last month.

First point: it was produced by a body called the “Coalition for a Secure and Transparent Internet”. Its mission is to “advocate before U.S. and EU policymakers, ICANN, registrars, registries, and other stakeholders about the importance of open access to WHOIS data.”

Slightly surprised the word “accurate” does not appear between “to” and “WHOIS” but for most sensible people I guess that would be implied.

Congressman Robert Latta asked several US Federal Agencies for their views on the state of play with WHOIS, referring specifically to the current Covid crisis. This, inevitably, raised broader issues.

In September CSTI published the replies the Congressman had received. Below are a few choice extracts.

From the Food and Drug Administration

“Access to WHOIS information has been a critical aspect of FDA’s mission to protect public health. Implementation of the E.U. General Data Protection Regulation (GDPR) has had a detrimental impact on FDA’s ability to pursue advisory and enforcement actions as well as civil and criminal relief in our efforts to protect consumers and patients.”

From the Federal Trade Commission

“You also highlighted your concerns that the implementation of the European Union’s General Data Protection Regulation (“GDPR”) has negatively affected the ability of law enforcement to identify bad actors online. I share your concerns about the impact of COVID-19 related fraud on consumers, as well as the availability of accurate domain name registration information.”

From Homeland Security

“HSI views WHOIS information, and the accessibility to it, as critical information required to advance HSI criminal investigations, including COVID-19 fraud. Since the implementation of GDPR, HSI has recognized the lack of availability to complete WHOIS data as a significant issue that will continue to grow. If HSI had increased and timely access to registrant data, the agency would have a quicker response to criminal activity incidents and have better success in the investigative process before criminals move their activity to a different domain.”

From the Department of Justice/FBI

“…greater WHOIS access for law enforcement would increase the effectiveness of… investigations by identifying illicit activity in specific areas, and would assist in disrupting and dismantling criminal organizations.”

How did we ever get to this?

That is an excellent question. I’m glad someone asked it.

I agreed about the need for ICANN to be given complete independence from the US Federal Government. But the Obama Administration handed over control without dotting the i’s and crossing the t’s. They left ICANN with the ability to abandon or substantially modify their historic mission, at least in respect of WHOIS.

Once free of a potential corrective intervention by the US Federal Government, ICANN became ever more obviously a trade association, a racket.

The public interest always comes second to Registrars’, Registries’ and their symbiotic co-dependent’s (ICANN’s) financial interests.

ICANN has weakened WHOIS, not strengthened it. They have reduced the obligations to ensure WHOIS data are accurate and that also means up to date. Link that with other real world developments about how the internet is being managed, and by whom, and anyone with two brain cells can see the future. But that won’t stop the Registrars, Registries and ICANN from dragging things out for as long as possible. Delay for them is the same as money. And money is what it is all about.

Could the US Government reverse its decision and take ICANN back under its wing? Probably not, but if it were shown ICANN acted in bad faith from the get-go, with no serious intention ever to fulfil or keep to the terms of the “Affirmation of Commitments”… then what?

Who was asleep at which wheel?

The EU must take its share of the blame for what happened next, at least insofar as it concerns WHOIS.

In the four years or more between the draft GDPR being published and it being adopted as a final, legal instrument, none of the following words were uttered, never mind discussed, anywhere at any time in Brussels, at least not in any public meetings where minutes were taken and later published. Those words were: ICANN, Registrars, Registries, Registrants and WHOIS.

It’s not that the EU took its eye off the ball. They never had their eye on it. It was only after the event that officials went in to bat to limit the damage, once the scale of ICANN’s impudent ambition became apparent. Why was it necessary for them to do that? Because ICANN had adopted an interpretation of GDPR rules which would never have been possible if those rules had been properly drawn up in the first place. And that interpretation is the reason for those comments shown above.

Finally, here is the other nagging question. If EU bureaucrats were not over-familiar with ICANN’s quaint ways and hidden intentions, if they had been lobbied, seduced, hoodwinked or neutralized by the hype, where were the cops and the governments?

A perfect smokescreen

Mainstream media journalists’ eyes glaze over at the first mention of ICANN’s recondite terminology. They shy away when they hear about the glacial pace at which things happen in obscure, acronym-heavy sub-committees. That creates a perfect smokescreen.

Nobody comes out of this covered in glory, other than the Registrars, Registries and their servants the ICANN bureaucracy. They got exactly what they wanted. Perhaps “glory” is the wrong word here?

A friend of mine who was once utterly immersed in ICANN and similar bodies, e.g. the IGF, reflected how, in the early days, there was a group of high-minded, public spirited people who flew around the world convinced their personal engagement with this still relatively “new thing”, the internet, and the participatory bodies which it was spawning e.g. ICANN and the IGF, was truly going to reshape that world and make it a better place. “Noblesse oblige”. Then they woke up and realised they’d been had.


In Parliament

On Wednesday in a “Westminster Hall” debate MPs discussed the seemingly ever-upcoming Online Harms Bill. The fact that this debate happened at all was down to the energetic engagement of Holly Lynch, the Member of Parliament for Halifax, West Yorkshire. Lynch opened and closed the debate with great skill and aplomb. I’d say she’s one to watch for the future. Halifax is lucky to have her.

The debate provided MPs from all political parties an opportunity to voice the concerns of their constituents and discuss the causes they support. As is customary, the Government sent the relevant Minister to listen and respond. MPs from the Labour, Conservative, Scottish National and Democratic Unionist parties spoke. There was a surprising degree of unanimity. But there again, maybe it wasn’t so surprising.

Wails and lamentations

Everyone lamented the delay in publishing the Government’s final response to the consultation on Online Harms. The Minister said a document will be released before the end of this calendar year with a Bill to follow early in the New Year. Nothing new there then. No obvious sense of urgency.

Neither did we hear definitively whether the Bill will be subject to pre-legislative scrutiny by a Committee of both Houses of Parliament. There was a suggestion it might even be 2024 before some or all parts of the legislation become operative. See above.

Bear in mind the Green Paper that started off this whole process was first published in October 2017. If parts of the legislation do not bite until 2024, that is seven years: a long time in the life of a child, and a whole generation of young children.

Support for age verification remains undimmed. Apparently.

The Government once again reiterated its support for age verification for pornography sites, insisting it wants to bring social media within scope. There were references to a major research project the Government is supporting which is designed to produce reliable “age assurance” technologies. This has been mentioned before but perhaps not at such length.

The implication is we may soon see tools being released which allow for the age of children below 18 to be confirmed with a high degree of certainty. This could open up a whole new chapter in online child protection.

The changing and challenging politics of Westminster

What is clear from Wednesday’s debate, and from the evolving political landscape in the UK, is that Government backbenchers no longer see their frontbenchers, and in particular their Prime Minister, as a safe pair of hands or an infallible demi-god who will always deliver victory on everything, forever. Blame Covid and Brexit.

The sheen of impregnability has gone. Ministers can no longer take it for granted they can get a majority for any old rubbish the (so-called) libertarians in No 10 or scaredy-cats elsewhere in Whitehall might want to throw at them.

The mood of backbench Tory MPs matches well with the mood of MPs across the House. They want measures that will force Big Tech to do a far better job, both generally and in particular when it comes to children’s rights and the protection of children. The only way to achieve that is through laws with teeth. Whatever trust in tech might have been knocking about has been scattered to the winds by their highly visible and repeated failures. Hiring smart lawyers and lobbyists isn’t going to change that. If anything it will only heighten politicians’ determination to “get regulation done“, to coin a phrase.

Danger and opportunity in the air

It is clear that with this mood of tech militancy there is danger in the air. Some might see it as an opportunity. When a Bill finally appears in either House, unless it is up to snuff, just about anything could happen. It will be a brave MP, Peer or Minister who stands up and says “steady on, let’s not be too hard on these groovy Californians”. Who will push back or speak up for tech interests? Only themselves and a handful of marginal bodies. Think tanks, research bodies and academics who have been significantly funded by, or are or have been close to, tech will need to tread with care lest their otherwise sensible insights get drowned out in accusations they have been bought and paid for.

Here is a simple statement of fact. Whatever else it might also be, the internet and its associated technologies, including the devices which can be used to connect to it, are now firmly within the consumer and family market. All parts of the internet value chain have to start acting as if they unreservedly accept that. The Wild West days are well and truly over.

In the context of the internet and tech, children’s and families’ interests can no longer be discussed as if they inevitably pose a threat to free speech or political rights, either in this country or any other. I am all in favour of “striking a balance” in this as in all things, but up to now, as far as I can see, that means children’s rights and interests get overlooked or put at the back of the queue. Enough already.


Kids can’t pay for the truth

In many countries advertising revenues were vital in helping keep “old-fashioned” newspapers and other types of journals alive, particularly smaller, local ones. Typically these would be in printed form but they all soon had an online counterpart.

In addition there was a vast array of smaller or specialist publications and magazines which, in varying degrees, also depended on advertising revenues.

The people employed to write for or edit the above, by and large, had learnt the trade of journalism. The importance of checking facts was dinned into them and they were bound by a code of professional ethics, reinforced by laws about liabilities.

Of course there were failures, sometimes spectacular ones, and there were always issues around how to select, interpret and present “facts”.  Typically, any bias correlated either with the individual author’s views or the owner’s interests. Minority opinions would often struggle to get an airing or a fair hearing.

What was NOT easy to find

Yet for all of its many and obvious failings, under the muddled ancien régime barefaced lies and straightforwardly insane or calculatedly manipulative explanations of world events were NOT that easy to find, certainly not on any large scale or via any easily accessible, readily available outlets. Self-correcting mechanisms were in place. You had to hunt for the dark side, and that alone tended to keep the numbers and the level of interest down.

But look where we are now. Platforms which have starved journalism of an important part of its lifeblood, advertising revenues, have now become major promoters, conduits, providers, call it what you will, of the exact opposite of what good journalism is about. And societies all over the world are hurting because of it. In several ways.

If the internet was just a large seminar room

If the internet was just a large  University seminar room, none of this would matter, or at least not very much.

But the internet is not a seminar room. Misinformation spread to serve a specific project has huge real world effects and rarely are these pretty.  On the contrary they pose a direct threat to liberal values and democratic institutions. Global warming deniers and anti-vaxxers threaten human life itself.

Nobody should refuse to take sides

Nobody should refuse to take sides in this debate, particularly if our children risk being gulled into becoming pawns or spear carriers for incendiary, hate-filled rabble-rousers carried along by destructive ignorance.

Specifically, tech companies’ pervasiveness in the modern world means they cannot claim to be innocent ingénues, bystanders with minimal or no interest in the outcome.

Myopic Utopianism is not the answer

Saying the answer to bad speech is more speech is the kind of myopic Utopianism that was partly responsible for getting us into this mess in the first place. The answer to bad speech is don’t give it a megaphone. Apologising afterwards just won’t do.

It’s easy to state the problem. Not so easy to come up with solutions if your company’s income depends not upon the truth, or any recognisable version of it, but upon something other than the truth.

The Mel Gibson School of Philosophy

Silicon Valley pulled off a remarkable trick when they managed to convince so many of us that the absence of regulation was a synonym for “freedom”, and therefore that any attempt to regulate them was an attack on “freedom”. I think Mel Gibson must have been their philosophical reference point. “Freedom” in this case was really a synonym for the ability to make money. In that respect they succeeded brilliantly.

Get “digitally literate”. Really? 

We are now being told to chill. Digital literacy is the answer.

Who could be against digital literacy? I’m not.  It should be encouraged to the greatest extent possible. But it is sort of dragging us back to the idea that the internet is a seminar room. If we are all just well educated enough virtue will triumph, evil will fail. Er, no.

The digital literacy schtick shifts the responsibility back to us to get ourselves  up to speed so as to negate or nullify the very things the platforms are doing.

For adults there is a stronger case for this. But for children?

Or pay for quality journalism

Alternatively we are told to chill for a different reason.

Good journalism is not dead. You just have to pay. Where does that leave kids and the poor? Some of the subscriptions are substantial. I know. I have several.

Countries which have public service broadcasters not dependent on advertising revenues, e.g. the BBC in the UK, are very fortunate. But money is tight, and those broadcasters are under constant attack from commercial interests who would like to see them dead and buried, or at any rate reduced in size and reach.

Public service and other broadcasters and publishers are having to compete against a variety of platforms not bound by their code of ethics. These platforms are not even bound by the same laws. They enjoy massive immunities.

And worse, they think nothing of cannibalizing other people’s output, providing it for “free” while they, not the originator, pull in even more advertising dollars off the back of it, in turn making it harder… you get the picture. This is one of the reasons why the authorities in Australia are trying to find a way to get the big platforms to pay.

Misinformation/disinformation/fake news is a child protection concern

The Online Harms legislation will begin its passage through the UK Parliament soon (we hope). The EU’s Digital Services Act is beginning its journey through the EU institutions. The question of misinformation/disinformation is clearly going to be important to several interests. Children’s organizations will be making the case that it is very much a child protection concern as well.

Posted in Default settings, E-commerce, Privacy, Regulation, Self-regulation

Let’s not make TWO mistakes

Nobody had spotted it. The European Commission openly acknowledged an error had been made. If left uncorrected it would bring to an end measures which have been protecting kids since 2009.

On 10th September the Commission published a proposal. It describes the problem and, pending the development of a permanent or longer term solution, suggests the status quo be preserved at least until 2025. Phew! That’ll do. Disaster averted.

What was the mistake?

If the mistake is not rectified, when the European Electronic Communications Code comes into effect on 20th December this year it will become illegal for a range of online businesses operating within the EU to continue or begin using automated proactive tools to try to detect child grooming behaviour, to use PhotoDNA or similar to identify hashes of known child sex abuse still images or videos, or to use classifiers to spot images likely to contain child sex abuse material so they can be sent for human review.

How well have these sorts of tools been working up to now? Absolute proof is impossible, but to get some insight just look at the last annual report of the USA’s hotline (NCMEC).

According to NCMEC, in 2019 16.9 million child sex abuse images were reported to them and deleted. 99% were discovered as a result of the use of the kind of tools that are now under threat.
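For readers unfamiliar with how these tools operate, the core idea of hash matching is conceptually simple. The sketch below is purely illustrative: PhotoDNA itself is a proprietary perceptual hash that can match resized or re-encoded copies, whereas this example substitutes an ordinary cryptographic hash (SHA-256), which only catches byte-for-byte identical files. The hash value shown is a stand-in, not a real database entry.

```python
import hashlib

# Hypothetical database of hashes of known, verified illegal images.
# (This example value is simply the SHA-256 of the bytes b"test".)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest identifying this exact file."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_flag_for_review(image_bytes: bytes) -> bool:
    """True if an upload matches a known hash and should go to human review."""
    return fingerprint(image_bytes) in KNOWN_HASHES
```

The crucial point for the policy debate is that a system like this never “reads” or stores the content of innocent uploads: it computes a fingerprint, checks it against a list, and moves on.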

Heavy weather

At the LIBE Committee earlier this week the Commission’s  remedial proposal ran into some heavy weather  (the relevant part of the video starts at 10.36).

Yet several of the points some of the MEPs made at the Committee meeting were perfectly reasonable. It is earnestly to be hoped Commission officials and MEPs can work something out that will also meet with the approval of the Council of Ministers.

A failure to put this right, now we can see it full on, would not be an oversight or a second mistake. It would be something far, far worse.

A ban on innovation to protect children?

If I have any criticism of the Commission’s proposal it is that it seems to be suggesting only child protection technologies currently in use and well-established will be covered and therefore allowed. Presumably somebody in Brussels is drawing up a list?

This looks perilously close to saying innovating to protect children is being made illegal. One imagines updates and fixes will be allowed so this could easily get extremely messy.

Would it not be better and simpler to describe technology-neutral general principles governing the use of proactive, online child protection tools? Providing any new tools that might come along conform with those principles, they would be in the clear.

Lack of trust and transparency

Obviously there is not a single MEP who wants to help sexual predators to groom children. Neither is any MEP unconcerned about the circulation of child sex abuse images.

Thus, the discontent being expressed at the LIBE Committee meeting was principally an echo of the lack of trust in tech companies. This is something European institutions could and should have addressed before now, but the fact that they have not done so should not mean children have to pay the price.

One MEP mentioned the possibility that companies currently proactively looking for illegal images or grooming behaviour might be deliberately acquiring data to use for commercial purposes.

The fact is the Commission’s proposal expressly states such behaviour would be illegal, as it would also be under the GDPR, so once again we are back to the lack of trust which in turn is rooted in zero transparency.

This is  one of the key aspects of the reforms to internet regulation to be addressed in the planned Digital Services Act and explains why the Commission describes the 10th September proposal as being only for the interim.

Posted in Default settings, E-commerce, Facebook, Google, Regulation, Self-regulation

Children’s groups speak out

The EU held a consultation on the upcoming Digital Services Act. It closed yesterday. Here is a link to the document I submitted with the support of one or more children’s groups from 15 Member States. What with the holiday period, Covid and the relatively short turnaround time, that’s not a bad showing. The processes that will now follow will likely carry on for some time and in the months (years?) ahead I hope we can build on that level of  engagement. It is vital that we do.

The year 2000 is ancient history

The decision-makers in Brussels-Strasbourg must understand that,  as compared with 2000 when they adopted the first set of ground rules for the internet, in the form of the e-Commerce Directive, the internet has changed almost beyond recognition.  Now one in five of all internet users in the EU is a child.

Children and families are therefore a major and persistent presence. They can no longer be treated as an irritating, trivial concern in a larger and more important or nobler struggle against, well, against all manner of societal and political evils. Children need to move from afterthought to always-thought in cyber policy making.

The five key recommendations

If you look at the document you will see it directs policy-makers’ attention to five major suggestions:

  1. Establish a duty of care
  2. Create a meaningful, independent transparency regime
  3. Revisit the GDPR through the lens of children
  4. Closely scrutinise the operation of the AVMSD
  5. Improve the co-ordination and management of policy-making processes affecting kids

There is a separate paper, which was not submitted as part of the formal response. It acknowledges that the EU has been a major world leader in online child protection but it also details where it has not always got it right. I call it the “Consequences” document.

Posted in Age verification, Child abuse images, Consent, Default settings, E-commerce, Internet governance, Pornography, Privacy, Regulation, Self-regulation

The EU’s Digital Services Act

On 8th September an EU consultation closes. It concerns a proposed new Digital Services Act (DSA). The Act will provide a once-in-a-generation opportunity to change the internet’s ground rules. Children’s advocates need to get busy.

Reforming the e-Commerce Directive

In the past twenty years or so the internet has changed almost beyond recognition. When the EU adopted the e-Commerce Directive in 2000 in many EU Member States children were a very small proportion of internet and mobile or smartphone users. Social media sites and services barely existed. Apps as we now know them were some way off. Tablets  you got from the doctor.

In 2000 the technology was still relatively new and poorly understood outside a narrow circle. Business asked Governments to “stay out of the way and let us innovate”. They got their wish. If problems arose, tech companies assured everyone they would “do the right thing”. This was called “self-regulation”.

It hasn’t worked. Or rather, its successes have been far too limited and inconsistent. By giving online businesses an almost unique form of legal protection, the e-Commerce Directive created a perverse incentive to do nothing. Many did exactly that. Nothing, or not enough.  Every tech company says they take children’s rights, including children’s safety, “very seriously”. Yet look where we are.

Today in the EU 90 million internet users are children. That is one in five of all users. Families and children are a major and persistent presence in the world of digital technology.  Whatever else it might be, in 21st Century Europe the internet is a consumer product. The internet and its associated access devices must start to comply with standards commonly found in the consumer space.

Need to press the reset button

The terrible things that have happened online to far too many children are not an unavoidable price which has to be paid in perpetuity so as to continue enjoying the many benefits of the internet. But to change the paradigm requires a major act of political will. The EU needs to press the reset button. The e-Commerce Directive is in need of a major overhaul. That’s what the DSA will do.

Beware the Brussels-Strasbourg cocktails-and-lobbying circuit

In any lobbying or campaigning work any of us might do as the DSA processes evolve – and we are probably talking years – it is impossible to over-emphasise the importance of not falling into the trap of thinking everything will be settled by Commission officials and, assuming it ever gets going again, the Brussels-Strasbourg cocktails-and-lobbying circuit.

A major part of the  decision-making machinery is the Council of Ministers.  This consists of Ministers from each country, typically supported by civil servants  and advisers in their own national capital and their permanent delegation in Brussels.

On matters such as these it is vital each of these elements and MEPs know how strongly people feel “back home”.

Briefing document heading your way

I have prepared a (short) briefing document which sets out my own views on the key strategic reforms that are needed. Although I wrote it, it is the product of many discussions with experts from several different disciplines and geographies.

The briefing paper is linked to another (slightly longer) document which acknowledges that, while the EU has been a major world leader in many areas connected with children’s safety and children’s rights online (and being kept safe is a major right), there have also been some spectacular failures that need to be corrected. Now is the time.

Watch out for these documents in your inboxes in the coming days. Use, adapt or ignore  them as you like in any campaigning you undertake. Hopefully we can join together in some way to make our collective voice louder.

The signs are good

In the past couple of months we have seen Vice President Šuica’s initiative, “Delivering for children: an EU strategy on the rights of the child” and Commissioner Johansson’s Communication on a “Strategy to combat child sexual abuse and exploitation” with its major emphasis on the position of victims of sexual abuse, online and off. There has also been a Communication  on areas of the GDPR that need another look in relation to matters affecting children. Many pieces of the jigsaw are coming together about now.

Add that to the fact almost every major tech company accepts reform of the rules is required, and you can see why I am feeling optimistic. But not naively so.

Optimism can be the graveyard of fools

For all the fine words we are hearing ahead of the match, we have to expect two things.

Whatever large or small tech companies say in public about how much they recognise the need for a new regulatory framework, when it comes down to the nitty-gritty detail don’t expect their views and ours to be the same. We will not all be holding hands and singing in harmony from the same hymn sheet. That will put a strain on some of the vaunted “partnerships” that exist.

Then there are the usual suspects in civil society. Many are not quite as starry-eyed as they once were about the, as they saw it, “freedom-loving, insurgent Mother Teresa goodness of Silicon Valley”, but we know from bitter experience they will generally find a reason to put children’s interests, children’s rights, lower down the list.


Posted in Default settings, Internet governance, Regulation, Self-regulation

Beware the harmful algorithm

These past few days British media outlets have been full of stories about the scandalous way 17 and 18 year olds have been dealt with following the cancellation of ‘A’ Level exams because of the virus.  At the root of the problem was an algorithm. Or rather, it was probably not the algorithm that was the problem, as such,  but how it was applied.

‘A’ Level results essentially determine which University or other educational or training opportunity you end up with as your life journey moves to another level at the end of your time at school. It is usually a pivotal moment in a young person’s life. Not necessarily decisive, but hugely important.

Teachers’ predictions

For readers outside the UK:  this year every young person’s teachers were asked to predict the results they would have obtained had they sat  ‘A’ Levels.  This happens every year so it is possible to compare predictions with actual outcomes.  A great many children perform exactly in line with predictions. A great many do not. That gap is important because it is populated  by people, not robots.

In the usual way, on the basis of teachers’ predictions University places and the like were offered, or not.

Devising an algorithm 

In the absence of actual exams how were the authorities to determine the final results? The answer they all came up with was to devise an algorithm then apply it to the mass of data showing the predicted grades.

The process of devising the algorithm appears to have started by looking at historic data for the subjects concerned, and the type of school concerned, for which read the demography of its intake.

These data would show that in school A, in a certain kind of area x% of  students could be expected to receive top level grades in Maths,  y%  would get the lowest grades in History and so on, subject by subject, school by school.

Class size matters

One of the other factors in the equation was class size, which is typically a proxy for parental income. Children in smaller classes tend to do better than children in larger classes. Who knew? “Smaller classes”  is often just another way of saying “private school”  or a school in a prosperous part of town. Which tends to get us back to parental income.

According to this way of looking at the matter, year after year a predictable proportion of young people will get a certain spread of grades and therefore end up going to a certain spread of Universities or whatever.

But here’s the kicker. Teachers’ predictions for each child were the very last factor to be entered into the calculations. And it looks like they counted for a lot less than the “framework” established by the historic data.
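The actual model was far more elaborate, and Ofqual has not published it as code; but as a purely illustrative sketch, assuming the shape described above (a fixed historic grade distribution per school, with teacher input reduced to a within-school ranking of students), the core fitting step might look like this:

```python
def assign_grades(ranked_students, historic_distribution):
    """Fit this year's cohort to the school's historic grade pattern.

    ranked_students: names ordered best-first. In this sketch, the
      within-school ranking is the only use made of teacher judgement.
    historic_distribution: fraction of past cohorts achieving each
      grade, best grade first, e.g. {"A": 0.25, "B": 0.5, "C": 0.25}.
    """
    n = len(ranked_students)
    results, start, cumulative = {}, 0, 0.0
    for grade, share in historic_distribution.items():
        cumulative += share
        end = round(cumulative * n)  # quantile boundary for this grade
        for student in ranked_students[start:end]:
            results[student] = grade
        start = end
    return results

# A school that historically got 25% A, 50% B, 25% C: however strong
# this year's cohort actually is, only the top quarter can receive an A.
grades = assign_grades(
    ["Asha", "Ben", "Cara", "Dan"],
    {"A": 0.25, "B": 0.5, "C": 0.25},
)
print(grades)  # {'Asha': 'A', 'Ben': 'B', 'Cara': 'B', 'Dan': 'C'}
```

The sketch makes the structural problem visible: the historic distribution is the ceiling, and no amount of individual improvement can lift a student above the quota their school’s past performance allows.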

“A process” became a cruel farce.

Marked down

The process has had terrible consequences for huge numbers of children. Why? Because in making the end result fit the pattern of previous years,  large numbers of youngsters were marked down from their teachers’ predictions.

In defence of the system, some officials tried to argue that the large-scale marking down was justified because teachers from certain types of schools, seemingly with equally certain mathematical predictability, are more prone to overestimate a child’s performance than teachers from other schools.

Dazzled by the maths. Lost sight of the young human being

If you happened to be in an improving school about to register its breakthrough moment, well that’s just bad luck according to the faceless ones who gave their blessing to all this. In  allowing themselves to be dazzled by the logic of the maths they lost sight of the humanity required when handling young people’s dreams.

I listened to two distinguished statisticians explain that “Algorithms work extremely well for populations, but not necessarily for individuals.” They said this quite dispassionately and not in any way to justify what happened this year.

A system that only works for populations and allows for substantial injustice to be suffered by an individual is a system not worthy of the name. Young people can improve their individual performance between mocks and finals. Poor mock results are often the spur to pull your finger out. That’s not something a blunt instrument like an algorithm can detect. With young people’s lives we need precision lasers not hammers.

A bureaucrat cannot be allowed, with a shrug of the shoulders, to sweep aside and crush a young person. I am tempted to invoke historical examples of other forms of brutal indifference to the individual in the interest of “the plan”, but the point is obvious to anyone with a brain and a heart.

Maths first. People nowhere. Not acceptable.

Worcester College, Oxford shows the way

Starting with Worcester College, Oxford some Universities have declared they will accept the teachers’ original predictions and ignore any marking down. I applaud them. How  easy it will be for others to follow suit I don’t know, but they should  all certainly make the effort and stand ready to explain why they didn’t.

GDPR no help but…

Article 22 of the GDPR says individuals shall have the right  “not to be subject to a decision based solely on automated processing, including profiling……”

That is pretty close to what has happened here although, because it isn’t exactly the same, Article 22 is of no use (the process was not based ‘solely’ on automated processing).

Yet the spirit of Article 22 is clear. There is an appeals process but it costs money and it may not be possible to complete the vast numbers of appeals now anticipated before the University  and other terms begin.

With the inevitable drop in students from overseas (maybe even all Chinese students) why can’t everyone do what Worcester College did? In the next two to three years perhaps there will be a higher drop-out rate, but at least each student would know this was down to them, not an invisible, unaccountable hand in Whitehall, Holyrood, Belfast or Cardiff. And set against that, there will be some students who get into University or another place who otherwise might not have done. And they will shine.

We should not have to choose between injustices

One last word: I have heard people complain that children from private schools and more affluent parts of town fared less badly than children from elsewhere. That is the inevitable result of how algorithms work. If there is bias in a system it will be reflected in the data and be reproduced by it.

This is not a reason to punish the children of well off families. The individual is what matters here, not their parentage. Insisting that little Billy Rich or Jenny Affluent does not get the University place he or she dreamed of because Frankie Poor and Lucy Skint from down the road didn’t get theirs is the wrong answer to the wrong question.

There are lessons here for all of us as algorithms seem set to play an ever more important role in the way all kinds of things work. Particularly over the internet.

Posted in Privacy, Regulation, Self-regulation, Uncategorized