Analogue rules in a digital world

Philippe Sands is a distinguished human rights lawyer who, among many other things, wrote “East West Street”. In it he traces expressions of the kind of philosophical thinking which would later be embodied in what we now call human rights law. His conclusion is unequivocal and, from my research, it seems almost every other writer on the subject agrees: modern human rights law emerged from the shadows of World War Two and in particular the Nuremberg Trials. The trial of the major war criminals ended in October 1946.

In December 1948, in Paris, the then recently formed United Nations adopted the Universal Declaration of Human Rights (the Declaration). This became the seminal human rights code. In 30 tightly written articles it proclaimed inalienable rights, henceforth to be enjoyed by everyone. In doing so the Declaration simultaneously defined a series of limits on what states can and cannot do vis-à-vis individuals, as well as telling states the basic things they should do. The Declaration has shaped or influenced every subsequent legal instrument which sought to address similar issues.

The Declaration was very much a product of its time and that time was analogue. It is wholly implausible to suppose the men and women who produced the Declaration could have imagined a world like today’s, where digital technologies in general and the internet in particular have so radically affected the way we live, the way society operates.

Unquestionably, they could have had no idea how children’s lives would become entwined with, and impacted by, these technologies. Is it idle to speculate about what they might have said if they had had even an inkling? Probably. But it would not be hard to guess, just as, I fondly believe, the Founders of the United States might have taken more care with the language and punctuation they used when writing the Constitution if they had had even the slightest notion that the massive firepower of assault rifles might one day be so easily obtainable and used to kill children in cold blood at school.

Prelapsarian innocence is also evident in the European Convention on Human Rights (ECHR), 1950, and the UN Convention on the Rights of the Child, 1989. The EU’s Charter of Fundamental Rights, 2000, is in the same line of development. It too arrived before truly mass engagement with digital technologies emerged, before social media appeared on the scene.

Grand visions

I get that these grand documents espouse even grander themes and general principles, but it now seems pretty clear to me that, while talking up the technical complexity, key industry players donned the clothes of liberty and progress, citing Conventions and Treaties drawn from another time, when the essence of their project was, or soon became, to build massively profitable enterprises. Who could be against liberty and progress? Only fools and tyrants. Did governments really understand the fearsome complexity of it all? They backed off, giving Silicon Valley pretty much a free hand. Hello self-regulation. Hello disappointment.

So a question we have to face now is whether, or to what extent, we are willing to allow the unknowing language and thinking of another age to lock us in, preventing us from addressing previously unforeseen or poorly understood challenges. In that connection I was especially taken by an extract from a case heard by the European Court of Human Rights where it was held, in relation to the ECHR, that the Convention must be “practical and effective and not theoretical and illusory”. Isn’t that what we all hope for and expect from our laws?

In the case of the Convention on the Rights of the Child, the distance between the world in which it was written and the world inhabited by 21st century children was so great, and the need to make it more practical and effective so urgent and pressing, that the UN Committee on the Rights of the Child commissioned the preparation of a General Comment. The purpose of the General Comment on children’s rights in relation to the digital environment was to explain how

States parties should implement the Convention in relation to the digital environment

In addition it would provide

…guidance on relevant legislative, policy and other measures to ensure full compliance with their obligations under the Convention and the Optional Protocols thereto in the light of the opportunities, risks and challenges in promoting, respecting, protecting and fulfilling all children’s rights in the digital environment.

The General Comment does not have legal force but it constitutes authoritative guidance on how to read the original intent of the Convention. Any legal instrument which ties itself to the Convention on the Rights of the Child can now only be properly understood through the lens of General Comment 25.

An early encounter with industry tone-deafness or was it arrogance?

From relatively early on, industry’s determination to press on with developing its business model could produce stark examples of tone-deafness, or was it arrogance? A drunken sense of invincibility? One of the first illustrations of this that I recall which impacted the mass market arose with the launch of Street View by Google in the UK. This would have been around 2009, when Google was in its aggressive pomp.

I was contacted by a group of parents. They were furious because, courtesy of Street View, images of their children were on display for the world to see, linked to a building that was obviously a school, their school. Nobody had asked these parents or the school how they felt about that. It just happened.

With a single click, courtesy of Google Maps, anyone could now see the school abutted a park area with several pathways leading to a large wood. You didn’t have to be completely paranoid to work out what thoughts were running through parents’ minds, and they certainly left me in no doubt about what those thoughts were.

When I contacted Google, their slightly embarrassed member of staff pointed out that all of the information now being displayed was already in the public domain. There was a school magazine with similar pictures, including more detailed ones showing even more children, with their names linked to them. It circulated widely in the town and could be bought in shops. Likewise anyone could buy an Ordnance Survey map, or examine one in the local library or Town Hall. Then they might see even more clearly how all the pathways in that area led into and around the woods by the school.

What this story illustrates rather well is how practices which would have gone unremarked, were considered boringly normal or everyday in an analogue world, can suddenly acquire a whole new potency or significance when they shift into the digital. It changes everything when you make something easily available on a mass scale in a large, fast-moving, multi-jurisdictional environment such as the internet.

Behaviours have since shifted to accommodate the fact that schools appear on maps that can be viewed online, as has, in many places, the practice of allowing children’s photos to appear online, either at all or linked to any kind of identifiable data. But the point is that Google made the decision unilaterally and pressed ahead regardless. Soon other interests piled in, pointing out a variety of risks which arose from Street View. Google walked it back, but this was a vibrant sign of things to come. You might say it was a manifestation of “moving fast and breaking things”, if that is not a blasphemous mixed metaphor.

End-to-end encryption and child sexual abuse material

It is impossible to believe that, when establishing a right to privacy or, later, rights in respect of one’s data, the authors of any legal instrument adopted hitherto ever contemplated the emergence of massively and easily available systems which could, in effect, create spaces where criminal gangs could safely organise the rape of children. Spaces which would facilitate the mass distribution of child sexual abuse material. Spaces where a child’s right to privacy and human dignity, to the integrity of their own body and a healthy life, could be ignored.

If there is no hierarchy of rights how is it that an alleged right to privacy can, in practice, trump all others by erecting barriers and obstacles which render other rights redundant or entirely theoretical?

Yet privacy-respecting tools exist

Here we get into an absurd bind. There is no doubt that scalable technical tools exist which can, with an exceptionally high degree of accuracy, pick up signs of criminal behaviour, e.g. the presence of child sexual abuse material. They work on the basis of pattern recognition, similar to anti-virus and other defensive software. The programmes I am thinking of pick up nothing but signs of child sexual abuse material and, if upon examination it appears no crime has been committed, no further action is taken. Nobody’s name is besmirched. No time is wasted in a court.
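For readers who want a feel for how narrow such detection can be, here is a minimal, purely illustrative sketch in Python. It assumes a blocklist of hashes of known illegal images, of the kind maintained in reality by hotlines and law enforcement; production systems use perceptual hashes (which also match resized or re-encoded copies) rather than the plain cryptographic hash shown here, and the placeholder entry below is invented for illustration.

```python
import hashlib

# Hypothetical blocklist of hashes of known child sexual abuse images.
# Real deployments use curated lists from bodies such as hotlines or
# law enforcement, and perceptual rather than cryptographic hashing.
KNOWN_BAD_HASHES = {
    "3f786850e387550fdab836ed7e6dc881de23001b",  # placeholder entry
}

def matches_known_material(file_bytes: bytes) -> bool:
    """Return True only if the file's hash appears on the blocklist.

    Nothing else about the file is inspected or retained, which is the
    point made in the text: the tool flags known material and nothing
    else, and a non-match triggers no further action."""
    digest = hashlib.sha1(file_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES

# An innocuous file does not match, so no further action is taken.
print(matches_known_material(b"holiday photo"))  # False
```

The design point is that the comparison is one-way: the scanner learns only whether a hash is on the list, not what else a file might contain.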

But still the tools are rejected by some, using one or both of two entirely bogus arguments.

They say they don’t believe the tools only pick up evidence of criminal behaviour, implying other data are also collected and then used for undeclared purposes, maybe to the commercial advantage of whoever collected it. Alternatively they say the tools could be deployed for other purposes altogether.

Note these are not arguments which suggest the tools cannot do what is claimed of them. These are arguments which suggest they are doing, or could do, more or different things. In other words these are arguments about transparency, supervision and management. This must be fixable, and here the children’s lobby and the privacy lobby need not be at odds in any way. Nobody I know wants child protection tools to be allowed to be used for any purpose other than protecting children. On the contrary. Anything which undermines confidence in the online child protection project is very much our enemy.

The second argument you sometimes hear is about only acting on the basis of “individual suspicion”. So a company or a police agency can and should only investigate a particular person or file if they have, by other means, already collected evidence which suggests that person or file is linked to criminal activity.

It is an argument against using automated tools to detect behaviour which threatens children. It is an argument which insists on importing old-school, analogue methods into the digital environment. I wonder how many additional police officers we might need to recruit? It is an argument for doing nothing that will work. There is another word for an argument like that, but in mixed company I will refrain from using it.

About John Carr

John Carr is one of the world's leading authorities on children's and young people's use of digital technologies. He is Senior Technical Adviser to Bangkok-based global NGO ECPAT International, Technical Adviser to the European NGO Alliance for Child Safety Online, which is administered by Save the Children Italy and an Advisory Council Member of Beyond Borders (Canada). Amongst other things John is or has been an Adviser to the United Nations, ITU, the European Union, the Council of Europe and European Union Agency for Network and Information Security and is a former Executive Board Member of the UK Council for Child Internet Safety. He is Secretary of the UK's Children's Charities' Coalition on Internet Safety. John has advised many of the world's largest internet companies on online child safety. In June, 2012, John was appointed a Visiting Senior Fellow at the London School of Economics and Political Science. This was renewed in 2018. More: http://johncarrcv.blogspot.com