Yesterday’s Daily Telegraph carried a great piece in which Rachel de Souza, England’s new Children’s Commissioner, makes clear her ambitions for age verification in general but in particular in respect of pornography sites. She obviously believes introducing age verification to protect children from porn sites is an urgent priority and worries that the provisions of the Online Safety Bill (OSB) as currently drafted are not strong enough.
De Souza is right and will find a great deal of support across a wide range of children’s organizations, women’s organizations and many other civil society bodies.
More media attention
This morning I was interviewed on national radio about the Children’s Commissioner’s article which also seems to have prompted a leader in today’s Times where they said
As almost everyone acknowledges, it is beyond time for tougher laws to protect children from harmful, abusive and pornographic social media.
Nevertheless it goes on to note
…there is considerable disquiet among MPs. This is not because anyone opposes … protections; it is because there is no compulsion on media companies to enact (them).
The “considerable disquiet” is being expressed, for example, by Damian Collins MP, Chair of the Pre-Legislative Scrutiny Joint Committee on the OSB. Collins says
We need to look at the role robust age-verification can play.
Hear, hear again.
Age verification is not a panacea, not a silver bullet, but it is a bullet. It has worked outstandingly well in respect of keeping children away from all the traditional forms of online gambling. There is absolutely no doubt it could do the same elsewhere, including protecting children from porn and other online harms where there is a legally defined or contractually prescribed minimum age.
It is a counsel of despair to say you accept that age verification is a good thing but you won’t agree to its introduction anyway because you are worried about how it might be misused. The challenge is to devise governance or supervision mechanisms which ensure as far as is humanly possible that there is no misuse. We all need to be confident it is only doing what it says on the tin. If we do not think we can create such systems of governance or supervision then the future is indeed grim for everyone but the uber geeks.
The end of internet exceptionalism
What I think is really going on is that, as the internet has become more and more integrated into all aspects of the nation’s political life, social life, family life and children’s lives, we are seeing the end of any notion of internet exceptionalism.
Or rather more and more of us are no longer willing to accept the wildness of the early years. More and more of us are insisting that the internet and all its works have to be much more closely aligned with our expectations in respect of other types of media and communication tools which now move among us.
An intensely political period beckons
We all need to be clear. We are entering a period of intense political activity. The opponents of age verification for porn sites are the usual suspects, the ones we only normally hear about in children’s debates when they are pointing out why something should not be done, when they are opposing this or that new idea or suggestion. They have a locker full of alibis for inaction. Innovation is cool everywhere but not here.
Let’s get rid of the most absurd argument right away
Looking ahead to any battles there might be around online porn and children’s access, I really do not want to hear anyone say Pornhub or similar can play, or has played, any kind of useful or positive role in the sexual education or relationship counselling of children.
The fact that some kids may have said they are cool with Pornhub (false bravado aside) or they say they found it “helpful” in some way, emphatically does not give anyone a licence, much less a mandate, to continue with or to tolerate the status quo.
To the extent that such sites might (and it’s a mighty, doubting might) have been “helpful” or informative to anyone in the past it is only a reflection of historic failings and lack of any better alternatives. No way is it an endorsement or a thank you to Pornhub.
Pornhub was never designed or intended to be an aid to children. It was designed and intended to make money by providing easy access to graphic forms of sexual imagery for the purpose of promoting sexual stimulation. Education or relationship counselling was not on the list. Children do not have a right to Pornhub. Children have a right to good sex education and states have an obligation to provide it.
Being young is about being a rebel
Children say they are in favour of loads of things their parents or the law forbid them or say are bad for them. It doesn’t make the children right or their parents or the law wrong.
The doctrine of “evolving capacities” can hit up against any number of brick walls and this is one of them. Do we really want porn sites making individual assessments of whether or not this specific 17-year-old or that 15-year-old actually would be “cool” and unharmed, even helped, by showing them some or all of their wares? The idea is absurd.
We are drawing to the close of an era
Somewhat prosaically, we are drawing to a close an argument that began at least as far back as the Gambling Act 2005 when, for the first time anywhere in the world, online companies were required to introduce robust age verification. Online age verification is no longer a wild and wacky idea. It has moved into the mainstream. The technology required is trivial. What has been missing is the will to use it on a wider scale. We are going to fix that.
But at a deeper level, to return to an earlier theme, in the UK and elsewhere in the liberal democracies, we are starting to see the internet first and foremost as a consumer product which, for all its many valuable features, must at its core behave as if it was fit for the consumer space.
Maybe some of the magic or the glitter of the internet will fade. I have a twinge of nostalgia for the excitement of the early days but the world has moved on and the internet cannot stand outside of it, frozen in virtual aspic.
Today the Financial Times has published a letter by me in which I applaud Apple’s decision concerning its plans to limit the possibility of child sex abuse material being distributed via their devices or network. I also suggest it will force Facebook to reconsider its extremely bad intentions.
I am not sure of the etiquette or copyright position vis-à-vis the author in relation to a letter he has penned but below is the text of it anyway. If this blog suddenly disappears and you can’t get hold of me please come and visit me in jail. Bring grapes.
“In his article about Apple’s plans to introduce new child protection policies, Richard Waters suggests the way Apple went about it had “cut short debate” about the potential impact of their planned measures (Opinion, August 10).
Specifically Waters refers to Apple’s plan to inspect content on users’ devices before it is uploaded and placed into a strongly encrypted environment such as iCloud. Apple is going to do this in order to ensure the company is not aiding and abetting the distribution of child sexual abuse material.
Sadly the “debate” has been going for at least five years and for the greater part of that time it has been completely frozen. Things intensified when, in March 2019, Facebook announced it (was) going to do the exact opposite of what Apple is now proposing. That too was a unilateral decision, made all the worse because, unlike with Apple, it was against a well-documented background of Facebook already knowing that its currently unencrypted Messenger and Instagram Direct platforms were being massively exploited for criminal purposes.
In 2020 there were 20,307,216 reports to the US authorities of child sexual abuse material which had been exchanged over either Messenger or Instagram, but Facebook has so far given no sign that it will row back.
The argument is, I’m afraid, a binary one. Once material is strongly encrypted it becomes invisible to law enforcement, the courts and the company itself. So either you are willing to live with that or you are not. Facebook is. Apple isn’t.
However, I suspect Apple’s decision will force Facebook and others to reconsider. There are scalable solutions available which can respect user privacy while at the same time bearing down against at least certain types of criminal behaviour, in this case terrible crimes which harm children.
If people believe Apple or indeed malevolent governments could misuse the technology, that is an important but different point which speaks to how we regulate or supervise the internet. It is emphatically not an argument which allows companies to continue doing nothing to curb illegality where technology exists which allows them to do so. Apple should be applauded. It has not just moved the needle, it has given it an enormous and wholly beneficial shove.”
There has been great rejoicing at ECPAT International’s global HQ in Bangkok. Last week the work we have been doing with our partners around strong encryption received an enormous boost when Apple made a hugely important announcement about their plans to keep children safe online. Not everyone likes it but we love it.
The cat is out of the bag
The cat is now very definitely out of the bag. Apple has confirmed a core contention advanced by ECPAT. There are scalable solutions available which do not break encryption, which respect user privacy while at the same time significantly bearing down on certain types of criminal behaviour, in this case terrible crimes which harm children.
If people believe Apple or malevolent Governments could misuse the technology, that is an extremely important point but it is a different one. It speaks to how we regulate or supervise the internet. It is emphatically not an argument which allows companies to continue doing nothing to curb illegality where technology exists which enables them so to do. Equally it is not an argument for Apple to “uninvent” what it has already invented.
What it is is an argument for Governments and legislatures to catch up. Quickly.
In the world of tech, alibis for inaction are always thick on the ground. Apple should be applauded. They have not just tinkered with the needle; they have given it an enormous and wholly beneficial shove. The company has not moved fast and broken things. It has taken its time and fixed them.
So what is Apple planning to do?
Apple’s announcement contained three elements. Later this year, in the next version of their operating system, first in the USA and then country by country, they will:
Limit users’ ability to locate child sexual abuse material (csam) and warn about online environments which are unsafe for children.
Introduce new tools to help parents help their children stay safe in relation to online communications, in particular warning about sensitive content which may be about to be sent or has been received.
Enable the detection of csam on individual devices before the image enters an encrypted environment. This will make it impossible for the user to upload csam or distribute it further in any other way.
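At its core, the third element rests on a simple idea: before an image enters the encrypted environment, a fingerprint of it is compared against fingerprints of already-confirmed csam. The sketch below is a minimal illustration of that hash-matching control flow only, and emphatically not Apple’s implementation: Apple uses a perceptual hash (NeuralHash) plus cryptographic safeguards and human review thresholds, whereas this toy uses an ordinary cryptographic hash, and every function name and the sample hash list are invented for illustration.

```python
import hashlib

# Hypothetical stand-in for a database of fingerprints of known illegal
# images. Real systems (PhotoDNA, NeuralHash) use *perceptual* hashes that
# survive resizing and re-encoding; a cryptographic hash is used here
# purely to keep the sketch self-contained.
KNOWN_HASHES = {
    hashlib.sha256(b"example-known-image-bytes").hexdigest(),
}

def matches_known_content(image_bytes: bytes) -> bool:
    """Return True if the image's fingerprint is in the known-content list."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

def screen_before_upload(image_bytes: bytes) -> str:
    """Screen an image on-device before it enters the encrypted store."""
    if matches_known_content(image_bytes):
        return "blocked-and-flagged"      # caught before encryption
    return "encrypted-and-uploaded"       # proceeds as normal
```

The point the sketch makes is the one in the text: the check happens before encryption, so nothing about the encryption itself needs to be weakened.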
Number three is what has prompted the greatest outcry.
A game changer
The mere fact that a company like Apple has acknowledged it has a responsibility to act in this area, and has come up with a scalable solution, fundamentally changes the nature of the debate. Now we know something can be done, the “it’s not possible” position has been vanquished. Any online business which refuses to change its ways will likely find itself on the wrong side of public opinion and, probably, the law, as legislators around the world will now feel emboldened to compel firms to do what Apple has voluntarily chosen to do.
And all the angst?
Several commentators who otherwise appeared to express sympathy for Apple’s stated objectives nevertheless couldn’t quite resist trying to take the shine off the company’s coup de théâtre by complaining about the way they did it.
However, in 2019, Facebook’s unilateral announcement that it intended to do the exact opposite of what Apple is now proposing suggests the possibility of reaching an industry consensus was wholly illusory.
I am sure many “i’s” need to be dotted, many “t’s” need to be crossed, but sometimes I feel when it comes to protecting children everything has to be flawless out of the traps. It is OK for Big Tech to get it wrong everywhere else and fix things later, or not, but that cannot be allowed to happen in this department. It is OK to innovate madly, but not here. We are judged by a different standard.
Don’t get me wrong. I am not in favour of imperfection. I do not applaud innovation or recklessness that pays no heed to the downside.
The simple truth, though, is this whole business has been and is about values and priorities. It is binary. Either you think steps should be taken to minimise risks to children before content is encrypted or you don’t. There is no middle way because when the content is encrypted the content is invisible forever. The bad guys win. Apple has shown how they lose.
Encryption is not broken. No new data is being collected or exploited
In a further statement issued yesterday, Apple makes it abundantly clear that they are not breaking any kind of encryption. They also underline that their technology is limited in scope and will not be used for any other purpose.
If you don’t believe that, we are back to the point I made earlier. Let’s discuss it, but whatever the outcome of that discussion turns out to be, Apple must be allowed and encouraged to carry on. I eagerly await other companies pledging to follow in their footsteps. Soon.
OK. I am going to shout it out loud, or rather I am going to put it in writing, in public, which is sort of the same thing.
There is a great deal in the UK’s Online Safety Bill (OSB) I like. A lot. Stuff we have been campaigning for over many years. However, it is also clear “le diable sera dans le détail” or, as in this case, “les codes de pratique et règlements”.
If you don’t mind being accused of being, er, a poseur, if you are going to say something utterly banal it probably helps to say it in a foreign language. It suggests this is no ordinary, banal banality.
In other words, on top of what appears on the face of the Bill, the success of the OSB in no small measure is going to be determined by a whole series of codes of practice and regulations which Ofcom and the Secretary of State will draw up. Remember “whole series”. I will return to it. But first:
Clause 36 (3)
Clause 36 (3) of the OSB tells us why, in particular, the codes of practice matter:
“A provider… is to be treated as complying with [the] safety duties for services likely to be accessed by children…if the provider takes the steps described in a code of practice…”
The OSB says similar things in respect of other codes that will be published on reporting, record-keeping and transparency duties, terrorist content, legal but harmful content, and the like. Codes of practice and regulations are going to carry a heavy burden. For now I will focus on children-related dimensions.
Thus, in terms of legal compliance and liability it seems if platforms do what the codes prescribe they will retain the same broad legal immunity which up to now has protected all intermediaries, irrespective of their size. The OSB does not expressly say that but broad immunity is an established part of the background radiation (the eCommerce Directive?) so at least one eminent lawyer believes that to be the case.
I have no quarrel with that. In my view, if a platform meets the terms of the OSB, the codes and regulations, they are entitled to retain broad immunity in relation to items posted by third parties where, prior to notification or discovery, they had no knowledge.
After all, the codes will be detailed and will decisively shape the behaviour of intermediaries. Turning to child sexual abuse material, for example, there is no doubt or ambiguity in relation to precisely what is expected of platforms (see below).
The logic of the codes of practice
And if an intermediary does not follow the codes, regulations or the terms expressly stated in the OSB? What then?
There will be a system of fines and other penalties. These are set out in the OSB or will be in what follows. However, the likely effectiveness of these fines and penalties is being argued about, not least because of doubts about Ofcom’s ability or inclination to mount and sustain an enforcement regime on the scale required.
The risk is obvious. If platforms conclude Ofcom is a paper tiger or is so overstretched they have little to fear any time soon we will have failed.
Platforms must believe there is a serious risk they could be turned over, held accountable, and not in the far distant future.
Ofcom needs an ally. Children need an insurance policy. I have one.
No compliance? Lose the relevant immunity.
Thus, for the avoidance of doubt, somewhere in the OSB it should be made explicit that where a platform governed by a code of practice or other regulations fails to honour the terms, not only could it become subject to the penalties the OSB will usher in, it will also forfeit any and all criminal and civil immunities from which it would otherwise have benefitted.
To be clear: I am not suggesting if platforms fail to honour the terms of a code or regulations they forfeit all immunities in respect of everything they do. That would be unreasonable.
But where a reasonably foreseeable actual harm has resulted, or is alleged to have resulted, from a failure to implement the terms of a code or regulations, then whoever can be said to have been injured as a result should be free to bring an action which would previously have been barred or would have failed because of the immunity. The immunity is therefore lost only insofar as it concerns, and is limited to, the reasonably foreseeable harm suffered by an identifiable individual or group.
Something like this would focus the minds of every Director or senior manager of every platform. It would also relieve Ofcom of a great deal of the responsibility for ensuring online businesses routinely follow the law, rather than just hoping they never get caught or inspected, or that, if they are, it will be some time hence, when today’s culprits might already have vanished with the loot.
“Whole series”. Big burden
It is apparent we will soon be seeing a raft of draft codes of practice which Ofcom has to prepare. Doubtless there will also be drafts issued by the Secretary of State in relation to his powers and obligations.
No problem. In principle. But … how will things work in practice?
A vast army of in-house and trade association lawyers and many lawyers in firms hired to supplement them are going to be able to buy their second or third yachts off the back of the work on the consultation and implementation of these codes and related regulations. Some of the preparatory analysis will already have happened and be feeding into Big Tech’s extremely well-funded lobbying strategies.
So how is civil society’s voice going to be heard? I know of no children’s organization in the UK which has the capacity to engage with these processes to anything like the degree that is going to be required or for the period of time entailed.
Every children’s charity is strapped for cash. A great many Charitable Foundations that sometimes step into the breach similarly are having a hard time. Yet if the proposed new regime is to work to best effect and in the way the Government intends, Ofcom or someone other than Big Tech needs to provide some cash.
I am not suggesting we can ever achieve a level playing field as between children’s organizations, the civil service and Big Tech, but something must be done to ensure the tables are not so vertiginously tipped against children’s interests being represented. The processes which lie ahead are going to require a sustained level of detailed engagement.
If there already was an industry levy which Ofcom administered, that would be the obvious solution. But there isn’t, so maybe as the OSB progresses through Parliament the Government can address this vital question.
General monitoring? No.
One of the supposedly sacrosanct articles of faith of internet governance hitherto has been that intermediaries should be under no obligation to undertake “general monitoring”. It first appeared in the USA courtesy of s230 of the CDA. We copied it in the eCommerce Directive of 2000. It lay at the root of much that later went wrong for children online, although it took some time for us all to realise it. However, once we did realise it there was no excuse for sticking with it. Yet that is precisely what the EU appears intent on doing.
In the EU’s draft proposal for a new Digital Services Act (DSA) the immunity provisions are repeated eight times e.g. as here on page 13.
“The proposed legislation will preserve the prohibition of general monitoring obligations of the e-Commerce Directive, which in itself is crucial to the required fair balance of fundamental rights in the online world.”
It is then further elaborated and developed in Article 7
No general monitoring or active fact-finding obligations
No general obligation to monitor the information which providers of intermediary services transmit or store, nor actively to seek facts or circumstances indicating illegal activity shall be imposed on those providers. (emphasis added)
The emphasised phrase, “nor actively to seek facts or circumstances indicating illegal activity”, is a remarkable thing for any organization to say if it also wants to claim it is concerned with upholding the rule of law. I paraphrase:
“Dudes. Chill. You don’t have to try and find out if any criminals are using your facilities to abuse children. Nah. Spend more time on the beach. Or innovating. Your choice. No pressure.”
The UK is going its own and better way
I am very pleased to say the UK’s OSB does not repeat the archaic and ridiculous formula of the EU’s proposed Article 7.
But make no mistake, neither does the UK impose a “general monitoring” duty. It solves the problem in a different way, by imposing quite specific, targeted objectives and requirements.
Here’s an example. Clause 21 (2) of the OSB sets out the duties which all platforms have in respect of illegal content, of which child sexual exploitation and abuse (CSEA) is a priority category. Providers must take
“proportionate steps to mitigate and effectively manage the risks of harm to individuals, as identified in the … illegal content risk assessment.”
In 21(3) in respect of providers of search facilities, the wording is even more explicit. They have a duty to:
“minimise the risk of individuals encountering priority illegal content.”
Is that an instruction to engage in general monitoring? No it is not. It is an instruction to use available, reliable and privacy-respecting technical tools to detect known illegal content.
The Advisory Board of euConsent held its first meeting last week.
euConsent aims to deliver a framework of standards which will encourage the development of a pan-European network of providers of online age verification and parental consent mechanisms.
Hugely important – global significance
It is hard to overstate the impact this project could have on the way the internet is used not just in Europe but potentially around the world. Many honest efforts by Regulators to protect children online have found it difficult to solve several key challenges which are rooted in the transnational nature of the medium. One of the most obvious and pressing concerned age verification. The European Commission recognised a pan-European solution is required. They ran a competition to select a team to tackle the problem. euConsent was the result.
Highest levels of data security
With euConsent a solution is in sight. Users will be able to verify their age or give consent for their children to use a site or service without disclosing their identity. All age verification providers who are part of the network will be independently audited to certified levels of assurance. Lawmakers, services, and Regulators can choose how and where the requirements will be applied. All providers will operate to the highest standards of data security.
If it is to be a success, such an important project needs vigorous and rigorous scrutiny as it progresses through its different phases. An Advisory Board has been established and I agreed to be its Chair. The Board comprises representatives of a wide range of stakeholders: European regulatory authorities, children’s rights organizations, tech companies and politicians. We held our inaugural meeting last Friday.
The Board will hold the project team accountable, helping them as they establish the standards. The Board’s collective and individual insights will contribute to a system that is workable with existing technology and facilitates the creation and implementation of effective regulations. Any new technologies which may emerge will know what they must be able to do if they are to be recognised as an acceptable tool.
Our first meeting was very encouraging. The initial research phase of euConsent has been conducted by academics from Leiden University, Aston University and the London School of Economics and Political Science, supplemented by further work from the Age Verification Providers’ Association, and the research firm Revealing Reality. These groups presented their key findings to the Advisory Board who were impressed by the scope of what has been done so far. Board member Anna Morgan, Deputy Commissioner at the Irish Data Protection Commission, found the evidence-based foundations of the project really promising. Almudena Lara of Google was pleased the opinions of children themselves are being sought and listened to in the research conducted by Revealing Reality.
Having such a spread of experts all gathered in the same Zoom produced a series of lively interchanges which were immensely valuable! Even at this early stage some key issues were raised. Negotiating the tension between data privacy and child protection lies at the heart of what we are trying to do, and how to cope with the already existing different regulatory approaches across jurisdictions is no less important.
I am looking forward to engaging with the Advisory Board further as euConsent’s technical solutions are developed and released over the coming months.
The Council of Europe has brought out an extremely useful and timely report. Quoting directly from multiple sources it draws together many of the principal legal instruments which have a bearing on the obligations of states actively to protect children from sexual exploitation and abuse online.
The report does so within a very particular frame of reference. It examines the role automated technologies and tools can play in helping states discharge their obligations to children. Moreover, while the legal instruments cited apply directly to and are generally enforceable against states, it is clear by extension the legal principles enunciated apply also to private entities e.g. companies which operate within or from signatory states.
The report was prompted by the debacle in the EU over the e-Privacy Directive. Readers will recall the central issue there was the legality of actions being taken by businesses which, on a wholly voluntary basis, had been deploying automated tools to protect children in respect of the distribution of child sexual abuse material (csam) and grooming.
The Council’s report (p11) describes the various types of technologies currently available to deliver either or both of those objectives.
With already known csam, online businesses claim to have been using one or more automated tools since at least 2009 (when Microsoft released PhotoDNA). Recently, we are told, automated tools have also been used to detect suspected grooming behaviour and to pick up images which are likely to be csam but have not yet been confirmed as being such by human moderators.
But were these automated tools and technologies legal to begin with? Would they still be legal after the European Electronic Communications Code (Code) came into effect? Would any potential legal difficulties be resolved by the interim derogation from the Code proposed by the Commission? Those were the questions which sparked controversy.
The States Parties to the Lanzarote Convention asked the Council of Europe to prepare a report which looked at the challenges raised in the debate.
As you would expect, and as you can see from the title shown above, the Council of Europe was concerned to ensure any and all actions taken in pursuit of the self-evidently desirable objective of protecting children nonetheless conformed with the Rule of Law, in this case human rights law.
The Council’s starting point is clear
On page 5 the following appears: “States have a positive obligation to protect children from sexual abuse and exploitation”.
For the avoidance of doubt the report then provided the following non-exhaustive list of legal instruments and references in support of its unambiguous, unequivocal statement.
“the UN Convention on the Rights of the Child and its Optional Protocol on the Sale of Children, Child Prostitution and Child Pornography;
the Council of Europe Convention on Human Rights, the European Social Charter and the Conventions on the protection of children against sexual exploitation and abuse, on Cybercrime and on Data protection (also known as Convention 108+);
the EU Directive 2002/58/EC of the European Parliament and of the Council (e-Privacy Directive) and the European Electronic Communications Code.
The relevance of the European jurisprudence is also highlighted through the analysis of the case law of the European Court of Human Rights and the European Union Court of Justice.”
I am not quite sure how this happened, it could simply be a matter of timing but, referring back to the point about businesses, what was not cited in the report was paragraph 35 of General Comment No. 25, the relevant bit of which reads “Businesses should respect children’s rights and prevent and remedy abuse of their rights in relation to the digital environment. States parties have the obligation to ensure businesses meet those responsibilities.”
“Member States have the obligation to secure to everyone within their jurisdiction, including children, the rights and freedoms that are laid down in international and European conventions. At the same time, business enterprises have a responsibility to respect these rights and freedoms. Together, States and business enterprises should aim at achieving the right balance between protecting children and ensuring equal access and opportunities of all children in the digital world.”
The report’s nine recommendations
On page 54 the Report’s recommendations begin:
Recommendation 1: Successful prevention and combating of the current forms of online child sexual exploitation and abuse (OCSEA) requires State actors to stay up to date and react to constant technological developments in this area, facilitated especially by the prevalent use of continuously evolving ICTs. The use of automated technology in the fight against OCSEA is, in this regard, essential.
Recommendation 2: To ensure a proper balance between privacy and protection of children against sexual exploitation and abuse fostering a dialogue between private sector companies and policymakers/regulators is of the utmost importance. Such dialogue should primarily aim at securing adequate transparency on the choice of the technology used and processes around its use.
Recommendation 3: Initiatives aiming at improving coordination in this area should be indicated and supported as they are vital to the reliability of the reference databases. In this regard, it is also necessary to secure more clarity on how the accountability mechanisms are managed, including the recruitment and training of individuals employed by private sector companies who are responsible for the assessment of illegal content, such as CSAM.
Recommendation 4: To better maintain a balance between privacy and protection of children against sexual exploitation and abuse, defining the proper level of safeguards should take place as early as possible in the process of development of technology. Policymakers and regulators should place particular focus on the dataset used by that technology to train complex combinations of algorithms.
Recommendation 5: In order to enhance privacy while prioritizing protection of children against sexual exploitation and abuse it is necessary to promote technological solutions that are the most efficient for the purpose considered.
Recommendation 6: Initiatives oriented at cross-sectional dialogue should be identified and supported.
Recommendation 7: The weight that is accorded to positive obligations against OCSEA under international and European human rights law, bearing in mind the best interest of the child, needs adequate appreciation in the legislative debate going forward.
Recommendation 8: Acknowledging the current legal lacunae, consideration should be given by CoE Member States to the need for a harmonised and sustainable legal framework which can provide legal certainty to SPs and address future technological developments.
Recommendation 9: The CoE Member States are strongly encouraged, in line with their positive obligations to protect children against OCSEA, to establish a public interest-based framework grounded in the Lanzarote Convention, enabling SPs to automatically detect, remove, report and transfer OCSEA-related information under data protection and privacy conditions and safeguards…
Fighting yesterday’s battle?
That sub-heading is not intended as a criticism because I recognise, without reservation, the great scholarship and thought that has gone into preparing the report. With its abundant references it will be an immensely useful tool for many child rights advocates in the months and years ahead. But let me quote from the report.
On page 5 this appears:
“While recognising the benefits a mandatory regime could bring, this report focuses on the practice of voluntary detection and voluntary reporting of OCSEA by service providers… As a consequence of this approach, the choice of technological solutions analysed in this document was limited to this context“. (emphasis added)
And on page 8:
“…recognising the benefits a mandatory regime could bring in terms of legal clarity and certainty, … this report focuses on the practice of voluntary detection and voluntary reporting…” (emphasis added).
In other words the report looked at the status quo and what was going on yesterday.
Things have moved on. We need to know more about the options for tomorrow, their strengths and limitations, both legal and technical. As the report itself hints in the two quotes I have just given (and in many other passages I have not quoted), voluntarism poses many problems and a mandatory approach is likely to be better.
Villains and angels alike
If we have learned anything from the 25-plus-year failed experiment with internet self-regulation it is that unless something is mandatory too many companies will not do it at all or, even allowing for the overriding principle of proportionality, their levels of commitment will vary enormously. There will be inconsistencies which owe nothing to alleged complexities or differences between platforms but owe everything to differences in the attitudes and priorities of senior managements in different companies.
And if there is scope to shelter behind any degree of legal immunity or uncertainty it will provide a safe haven for villains and angels alike.
Moreover, absent an acceptable transparency regime (a point referred to many times in the Council of Europe report) how can we be confident anyway that we know what companies are doing or have done, even those with self-affixed halos permanently hovering above their heads?
Meanwhile back at the ranch
Going full circle to the reason for the report, the European Commission has since published its proposal for a Digital Services Act (DSA). In Article 6 they make clear relevant service providers will not lose any legal immunities “solely because they carry out voluntary own-initiative investigations or other activities aimed at detecting, identifying and removing, or disabling of access to illegal content…”.
Grooming not covered?
Article 6 thus clarifies and establishes a legal basis for what many had assumed was the status quo, which is good, but by referring only to “content” it appears not to include attempts to detect illegal behaviour such as grooming. Which is not so good.
Article 7 is the big problem
Article 7 expressly says
“No general monitoring or active fact-finding obligations
No general obligation to monitor the information which providers of intermediary services transmit or store, nor actively to seek facts or circumstances indicating illegal activity shall be imposed on those providers.”
Why is the Commission suggesting platforms are allowed to be indolent or indifferent to any law breaking which they are facilitating? Why are they saying this when they know it is online intermediaries which are the largest facilitators of online crimes against children?
Elsewhere in the DSA there are nudges which will encourage companies, especially the larger ones, to look into illegal content and behaviour which threatens children, particularly where (as would almost always be the case) this breaches their own terms and conditions of service. And yet, and yet.
Article 7 – a magnet and a multiplier for the feckless and reckless
Surely where the technical means exist which would allow the detection of illegal behaviour towards children, its use should be mandatory not optional?
I say that while remaining mindful of the overarching rule of proportionality and insisting always that all privacy rules are properly respected.
If or when stronger rules kick in governing safety by design one would expect the need to mandate protective measures of this kind to diminish because the risks will have been designed out of the devices or apps. However, we are a long way from reaching that happy state.
It’s important to get things in perspective
When I walk into a public building I normally have to go through a metal detector. If it spots any metal I have to stand ready to show or explain what it is. If I have no metal I show and explain nothing; nothing is recorded. No data about me is processed. I remain data intacta (to coin a phrase).
It’s the same at airports with my suitcases and backpack. Sniffer dogs and scanners are looking for suspected contraband or prohibited items. Are my rights to privacy being violated? I don’t think so, or if they are it is minimally and for a wider social good. If airport officials or the police home in on me they are doing so because the scanning or sniffing provided reasonable grounds to suggest I might be a law-breaker. Absent such indications I go on my way, again data intacta.
If we can reproduce systems such as these online why wouldn’t we? If it is only because of a lack of trust in the supervisory arrangements which would need to guarantee the integrity of the processes involved we should address that not use it as an alibi for inaction.
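The scan-and-discard principle behind the metal detector analogy can be sketched in code. This is purely illustrative and is my own hypothetical construction, not a description of any real deployed system: production tools use perceptual hashing (PhotoDNA-style matching that is robust to resizing and re-encoding) rather than the exact SHA-256 digests used here, and the database and function names are invented for the example.

```python
import hashlib

# Hypothetical reference database of digests of known illegal images.
# Real systems match perceptual hashes against curated databases;
# an exact cryptographic hash is used here only to keep the sketch simple.
KNOWN_ILLEGAL_HASHES = {
    # SHA-256 digest of the stand-in payload b"test"
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def scan_attachment(data: bytes) -> bool:
    """Return True only if the item matches the reference database.

    Like the metal detector: the digest is computed, compared and
    discarded. A non-matching item is not inspected further and
    nothing about it is retained; only a positive match triggers
    any follow-up by a human reviewer.
    """
    digest = hashlib.sha256(data).hexdigest()
    return digest in KNOWN_ILLEGAL_HASHES

# A non-matching item passes through, data intacta.
assert scan_attachment(b"holiday photo") is False
# A match against the database would be flagged for review.
assert scan_attachment(b"test") is True
```

The point of the design is that, as with the scanner at the airport, everyone who is not carrying contraband passes through without any record being created.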
Deal with it later this year? Hmm
We are all being told that “later this year” the Commission will publish a proposal to establish a policy for the longer term concerning the points raised around the interim derogation and more widely in respect of the strategy to combat child sexual abuse, online and off. All the points I have just mentioned can be dealt with then. Seemingly. But really?
A few things to say about that:
“Later this year” is a moveable feast.
“Later this year” MEPs and others could be sick to death of anything and everything digital. Their attention might slip as other pressing matters compete for their time.
Are we seriously being asked to believe that only months after European Institutions decided not to impose any obligations to “actively seek facts or circumstances indicating illegal activity” they will then reverse themselves and allow a cross-cutting and overarching exemption for children? I have my doubts so let’s have it out now or at least put down major markers for when the EU institutions do come to consider a way forward. Assuming they actually do.
In other words “later this year” could mean the current, highly unsatisfactory status quo might continue for a very long time. We do not want that to happen.
“Put it out Tuesday. Get it right by version three”. According to Professor Ross Anderson this was the dominant way of thinking in Big Tech long before Mark Zuckerberg’s updated nostrum, “move fast and break things”. Anderson suggests the reason this approach became prevalent in the industry, and IMHO continues to be, is because of the advantages gained from the famous “network effects”. Being first or early can be decisive in building up market share, providing a springboard for further investment and development.
If all you are doing is writing software for a machine to make better sausages I can step aside and leave you to it. Probably. But if you are designing products or services intended or likely to get into the hands of children you cannot be afforded any such leeway.
You would have thought, with products or services aimed at children or likely to be used by them, nobody would really need to be told to “think it through – look at all the angles, watch out for possible pitfalls, dangers to children, and stop them from happening.” Yet how many disasters have we had to face, particularly around the internet of things, even including toys? Let’s not get into the “adjustments” Google, Facebook, TikTok, Snapchat and the rest have had to introduce. By which I mean retrofit.
From today no tech company anywhere in the world has an excuse not to get it right first time. They need only look to Australia. The country’s e-Safety Commissioner has produced an absolutely stunning set of easy-to-use, free-to-use tools. Because they are aimed specifically at products or services which provide opportunities for social interaction they embrace internet users of every age and disposition but their particular relevance to children hardly needs further elaboration.
I have a sneaking feeling the tools will be used by every type of business or organization that works in digital spaces or builds products which can connect to the internet. To quote Commissioner Inman Grant they help “embed safety into the culture, ethos and operations” and those are things everyone should be concerned about in all types of undertakings.
The e-safety packages from Down Under are destined to become a must-have. There’s one aimed at start-ups and one for the mid-tier or enterprise level. Industry was closely involved in preparing the tools so there is zero fat or flab; they are highly tailored to be optimally usable by a wide range of people engaged in bringing products to market.
Henceforth, as part of due diligence, every would-be investor will or should demand to know that what they are being asked to put their money into, lend their name and reputation to, is following the processes and steps which the Antipodeans take you through with exquisite care and attention to detail.
I have had a demo. Companies and not-for-profits will be beating a path to the Aussies’ virtual door. Or they will if they have any sense.
The United Nations Convention on the Rights of the Child is widely accepted as the cornerstone or foundational document in respect of children’s rights. Every country in the world bar one has signed and ratified the Convention. The highly regrettable exception is the United States. The reasons are complicated, nevertheless the USA has endorsed two of the Convention’s Optional Protocols. However, the fact that it has not fully signed up for the main event illustrates an important point.
Merely adopting the Convention should not be taken as a guarantee that in every signatory country the position of children cannot be improved. By the same token, not signing it cannot be taken to imply children in that country have no rights and are therefore bound to suffer an endless cycle of privations.
The Convention enshrines and describes legal rights but it is not written in what is commonly understood to be legal language. It is a political document written in political language, adopted by politicians in a political institution, the United Nations. It sets out eternal values and timeless principles. It does so using high level language. That is because the UN, of all places, recognises we live in a world dominated by nation states with a rich variety of cultures.
The real significance of the Convention is therefore twofold. It establishes standards by which any and every country can or ought to be judged and held accountable both by the international community and by their own citizens. And it acts as an important reference point or guide to national governments, private entities and individuals everywhere, perhaps particularly or above all children themselves.
The Convention is a pre-internet document
The Convention was adopted in 1989 following a gestation period of at least ten years. The authors could not have foreseen how children’s lives were about to be significantly impacted by the digital revolution which was just around the corner. Nobody could have. Nobody did.
Today if we were starting from scratch in key places the language we would use would be different. Maybe not dramatically different, but definitely different.
The General Comment rides to the rescue
Any idea of changing the text of the Convention was viewed as impractical. It would just take too long and with the current strains in geo-politics who knows where it might end up anyway. Nevertheless, recognising the importance of updating the context within which the Convention should be read by affected parties, the Committee on the Rights of the Child, the UN’s guardians of the Convention, began a process of writing what we now refer to as “General Comment 25”. Actually its full title is “General comment No. 25 (2021) on children’s rights in relation to the digital environment.”
It took over four years to prepare, consult on and adopt the General Comment. The Committee was helped and advised by some very smart and knowledgeable people led by Beeban Kidron of 5Rights, with Sonia Livingstone as the lead author, aided by Gerison Lansdown, Jutta Croll and Amanda Third who, in turn, talked to other very smart and knowledgeable people from 28 different countries, involving hundreds of organizations and hundreds of children. The final text was agreed and published in March 2021. 5Rights produced an excellent commentary.
There seems little point in me reciting the content of the General Comment. If you go to the link provided above you will see it is well laid out, presented in highly accessible language and it is not very long. The writers had to conform to the UN’s prescribed standard length for documents of this kind.
The General Comment makes explicit that children’s rights apply in the digital environment every bit as much as they do in the physical world. Adjustments have to be made to accommodate the manifest differences between the two spaces but to the greatest extent possible there should otherwise be alignment. The notion of “internet exceptionalism” is expressly rejected.
In concluding I refer to what I think is one of the most important bits of General Comment 25.
Section I, paragraphs 36-39, draws on earlier platitudinous puffs from elsewhere but sharpens things up in a completely new and brilliant way, neatly reflecting much that appears in the Council of Europe’s “Handbook for policy makers on the rights of the child in the digital environment.” Sonia Livingstone is the common link between the two with, in the latter case, myself and Professor Eva Lievens also lending a hand.
Here are paragraphs 36-39 in full:
Children’s Rights and the Business Sector
36. States parties should take measures, including through the development, monitoring, implementation and evaluation of legislation, regulations and policies, to ensure compliance by businesses with their obligations to prevent their networks or online services from being used in ways that cause or contribute to violations or abuses of children’s rights, including their rights to privacy and protection, and to provide children, parents and caregivers with prompt and effective remedies. They should also encourage businesses to provide public information and accessible and timely advice to support children’s safe and beneficial digital activities.
37. States parties have a duty to protect children from infringements of their rights by business enterprises, including the right to be protected from all forms of violence in the digital environment. Although businesses may not be directly involved in perpetrating harmful acts, they can cause or contribute to violations of children’s right to freedom from violence, including through the design and operation of digital services. States parties should put in place, monitor and enforce laws and regulations aimed at preventing violations of the right to protection from violence, as well as those aimed at investigating, adjudicating on and redressing violations as they occur in relation to the digital environment.
38. States parties should require the business sector to undertake child rights due diligence, in particular to carry out child rights impact assessments and disclose them to the public, with special consideration given to the differentiated and, at times, severe impacts of the digital environment on children. They should take appropriate steps to prevent, monitor, investigate and punish child rights abuses by businesses.
39. In addition to developing legislation and policies, States parties should require all businesses that affect children’s rights in relation to the digital environment to implement regulatory frameworks, industry codes and terms of services that adhere to the highest standards of ethics, privacy and safety in relation to the design, engineering, development, operation, distribution and marketing of their products and services. That includes businesses that target children, have children as end users or otherwise affect children. They should require such businesses to maintain high standards of transparency and accountability and encourage them to take measures to innovate in the best interests of the child. They should also require the provision of age-appropriate explanations to children, or to parents and caregivers for very young children, of their terms of service.
My dad was 13 years old when the Second World War broke out. He was Jewish and lived in Poland. Yet he survived. I have written a book about how he did that. Brought out last year by a small Edinburgh-based publisher, the book is now being republished in the UK and English-speaking Commonwealth by Hodder. On 24th June it is coming out on Kindle and as an Audible audiobook, narrated by Sir Simon Russell Beale. On 29th July the hardback will be available. Pre-ordering is open now. It will also be available in translation in, so far, six different European languages (Spanish, Italian, Dutch, Danish, Romanian and Polish) with more to come I believe. There will be a US edition in hardback in March next year.
Hope you order it, like it, write reviews on Amazon and elsewhere about it, and tell your friends. I trust you will not only find the story interesting and exciting but you will also pick up on the many modern resonances.
Now back to the day job with apologies for this outburst of shameless self-promotion.
I hate to be unfair to Facebook (please don’t tell anyone I wrote that) but they are in the major line of fire in the encryption debate only because we know the scale of apparently criminal behaviour taking place on two of their major messaging platforms, namely Instagram Direct and Facebook Messenger. At least we know it in respect of offending against children because Facebook has for several years deployed tools to detect and report it. And the scale is huge. Which is why what Facebook does in this area is so important.
I make this point because when we finally reach an agreement it should be an agreement which embraces all messaging platforms, not just Facebook. Level playing fields matter. We must not build in commercial incentives to help crooks.
Is it likely Facebook is singularly afflicted? Er, no. Last week we got an insight into why I say that, thanks to the cops in Australia, the FBI and other national police services.
The police created a dummy messaging App and customised phones
Operation Greenlight/Trojan Shield involved creating an App called “ANOM” which was put on customised phones then marketed to criminal organizations and individuals by a major underworld figure who had been “persuaded” or paid to help law enforcement.
Planning for the operation began in 2017. Police closed down two messaging platforms that were known to be being used by large numbers of criminals. This created an opening in the market for ANOM and when the moment arrived it was ready.
The customised phones could not send or receive phone calls. They had no camera. The only usable App on them was ANOM and it could only message other ANOM users. As the Australian Federal Police put it “Criminals needed to know a criminal to get a device.” Eventually 11,800 of them were distributed.
Big scale police action
Reports of the results of the police action seem to vary slightly between different media outlets but they are all roughly in the same ballpark.
My numbers come mainly from the Washington Post. Police officers in 17 countries took part in the operation which stretched over eighteen months. During this period 27 million messages were exchanged via ANOM and the FBI saw every single one. They could see them in real time.
Some 9,000 police officers were involved in sifting the messages, which were exchanged in 45 different languages. The countries with the most users were Germany, the Netherlands, Spain, Australia and Serbia although, in Europe, the largest number of arrests was made in Sweden (75). In total over 800 people were arrested in 17 different countries. The Australian police made 225 arrests. As one police officer put it, an important objective of the action was to undermine criminals’ confidence in messaging apps. I think they have achieved that in spades.
And what did the police see?
As a result of their access to the messages the senders and receivers thought were safely encrypted, law enforcement officers were able to seize eight tons of cocaine, twenty-two tons of marijuana, two tons of methamphetamine and amphetamines, 250 firearms, 55 luxury vehicles and more than US$48 million in cash and cryptocurrencies. Murders and kidnappings that were being planned never happened. Nine police officers were arrested because they were found to be in cahoots with the bad guys.
A telling touch
The creators of ANOM even came up with a marvellous marketing strap line to help move the product: “Enforce Your Right To Privacy”. Yes, even the criminal underworld is susceptible to good messaging.
What more eloquent but depressing testimony could there be? How have we allowed an important human right, privacy, to become so grotesquely abused and used by some of the world’s worst purveyors of evil and death?