“Ordinary sensibilities” and related matters

A Joint Scrutiny Committee of the House of Commons and the House of Lords was established to look in detail at the UK Government’s draft Online Safety Bill and to consider how it might be improved before it is tabled in Parliament, probably in February or March, to begin its legislative journey.

The Joint Scrutiny Committee was constituted last summer. Its report became available on 14th December. The Committee did an excellent job, and I will write more about it soon. You will be glad to know it won’t be a very long blog because I agree with so many of the Committee’s recommendations. I hope the Government does too. We shall see.

There was, however, one bit of the report which caught my eye because it touched on a larger conversation which has been going on for a while and needs to be resolved. Maybe “tidied up”, rather than “resolved”, is a better way of putting it because, at root, I don’t think there is likely to be much controversy. What we are looking at here is yet another example of the friction and challenges caused by having to interpret laws drawn up in an analogue age when we are all now living in a digital one.

The key paragraph of the Joint Scrutiny Committee report to which I refer is 199:

“We heard that the definition of harm to children would benefit from being tightened. In particular, the Government has decided not to use the established formula of a “reasonable person”, instead going with a relatively novel formula of a “child of ordinary sensibilities”. In devising their proposed reforms to the Communications Offences, the Law Commission rejected “universal standards” such as a reasonable person test for establishing harm to an individual. They took the view that there are too many characteristics that may be relevant to whether communications may be harmful to apply such a test, as (Gavin Millar QC) put it: “In reality there is no such person. Some people are more robust and resilient than others.”

Millar is 100% correct. How would a remote service such as an online platform be able reliably to recognise the robustness or resilience of a particular child? How could a (very often extremely large) distant internet business know if Jenny or Jimmy had “ordinary sensibilities”, let alone be able to distinguish them from Mandy and Freddy whose “sensibilities” were not “ordinary”? How might anyone? Yet the Law Commission’s rejection of any and all forms of universal standards puts online businesses back where we started: in a hopelessly vague mess. And yet, as we shall see, in the end the Law Commission does propose a sort of universal standard. And it’s a good one.

Overthinking it

While HMG was getting its proverbials in a twist, it was also connecting with a wider and older discussion about how, in the online space, one is meant to provide for and accommodate a child’s “evolving capacities”, an idea which, in turn, is closely tied to the primacy of the “best interests” of the child.

Gerison Lansdown produced an excellent analysis of the concept of “evolving capacities”. It is beautifully written, published by UNICEF with a chunky Foreword by the then Director of the Innocenti Centre, the redoubtable Marta Santos Pais.

Best interests and evolving capacities

“Best interests” and “evolving capacities” are two concepts of fundamental importance to the UN Convention on the Rights of the Child (UNCRC). They have influenced and are reflected in a great many other legal instruments and practically every relevant area of public policy which concerns children. Quite right too. However, both emerged in a time when every significant person or entity involved in making decisions which could affect the quality of a child’s life would either already know the child or could get to know them well enough within a contextually appropriate timeframe.

Parents, guardians, teachers, social workers, police officers, probation officers, judges and officers of the court are among the most obvious categories of decision-maker the authors of the UNCRC had in mind.

Can an online platform ever be put in the same position as any of the aforementioned? Can assessments of an individual child’s capacities or sensibilities be built into decision-making systems in a way that would allow a platform to take a reasoned view as to whether or not to allow a particular child to join in the first place? Or, once a member or participant, to determine what types of content or activities fitted their profile and therefore were permitted, and which did not and were therefore not permitted? No. I didn’t think so either.

Just asking the question more or less answers it. Neither now nor in any reasonably foreseeable future would even the most starry-eyed AI optimist claim such systems will be devised, and even if they could be, just think of the mountain of personal data someone, somewhere, would have to collect, process and hold. Frances Haugen told us companies like Facebook already know quite a lot about all of us, either from directly obtained data or courtesy of inferred data, and that is almost certainly true, but I doubt even Facebook’s cleverness would allow it to cover the kind of canvas I have just described. Nor would we want it to.

The only practical answer

I assume nobody is going to suggest closing down all remote platforms or making them adult-only. And leaving aside for now the role that parental consent or permission might play, because that raises a whole raft of other issues which make it even more fraught with difficulty, it is clear that in remote environments we have to do two things. We must comply with any bright red lines the law imposes, but otherwise we must take age as a proxy for sensibilities or capacities. More to the point, we should grant platforms, children and parents the legal certainty that would follow from doing so.

That said, where a remote platform did acquire specific information about the capacities or sensibilities of an individual child, either at the time of signing on or later, it should be obliged to act on that information and accommodate the child. In principle that is probably not very different from the position which exists now, at least in theory, although it must be open to doubt whether, or to what extent, it happens in practice.

The Age Appropriate Design Code to the rescue

The UK’s Age Appropriate Design Code contains a set of five age bands as good as any I have seen: 0-5, 6-9, 10-12, 13-15 and 16-17. If a different internationally agreed standard were to emerge, it would be unlikely to differ much from this, so, in the interests of harmonisation, we could tweak ours and sign up to it. But don’t hold your breath. This one will do for now.

An Annex to the Code provides more detail to help online businesses understand what these age bands represent in terms of the likely competencies of children in those groups. Relevant online services are advised they can use the age bands “as a guide to the capacity, skills and behaviours a child might be expected to display at each stage of their development.”
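To make the idea of age as a proxy concrete, here is a minimal sketch, in Python, of how a service might map a user’s declared or estimated age onto the Code’s five bands. The band boundaries come from the Code itself; everything else (the function name, the “adult” label, the data structure) is purely illustrative and not taken from the Code or any regulator’s guidance.

```python
# Illustrative sketch only: using the Age Appropriate Design Code's five
# age bands as a proxy for a child's likely capacities. The band boundaries
# are the Code's; the names and structure here are invented for the example.

AADC_AGE_BANDS = [
    (0, 5, "0-5"),
    (6, 9, "6-9"),
    (10, 12, "10-12"),
    (13, 15, "13-15"),
    (16, 17, "16-17"),
]

def age_band(age: int) -> str:
    """Return the AADC age band a given age falls into, or 'adult' for 18+."""
    if age < 0:
        raise ValueError("age cannot be negative")
    for lower, upper, label in AADC_AGE_BANDS:
        if lower <= age <= upper:
            return label
    return "adult"  # 18 and over fall outside the Code's child bands

if __name__ == "__main__":
    for a in (4, 11, 16, 19):
        print(a, "->", age_band(a))
```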

Thus, I am not in any way suggesting we change, amend, fiddle about with or do anything at all which might undermine the best interests and evolving capacities principles. But I think it would be better for everyone if we had greater clarity about what we expect businesses to do, and this part of the Code points the way forward.

Ofcom’s role

Under the Online Safety Bill, Ofcom is to be charged with developing an overall risk profile for different types of online business, and each individual business is expected to develop its own, tailored to its specific offerings. Surely it would make sense for Ofcom to adopt the Age Appropriate Design Code’s age bands as the basis for compiling its risk assessments insofar as they concern children, and for that to become the legal default or presumptive standard for everyone?

Each individual company’s risk assessment would have to measure up against the default. Any company that wanted to depart from the default would, of course, be free to do so, but it would have to stand ready to explain why its risk assessment, including how it interpreted capacities, was at least as good as the default.
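To illustrate the default-plus-departure model, here is a small, purely hypothetical sketch. The age bands are the Code’s; the risk levels, the service name and the idea of a machine-readable “justification” field are all invented for the example and do not reflect anything Ofcom has published.

```python
# Hypothetical sketch: a regulator-set default risk level per age band for
# one category of functionality, alongside a company's own assessment.
# Any departure from the default must carry a recorded justification.

from dataclasses import dataclass, field

# Invented default risk levels per AADC age band (e.g. for open direct messaging).
DEFAULT_RISK = {
    "0-5": "high", "6-9": "high", "10-12": "medium", "13-15": "medium", "16-17": "low",
}

@dataclass
class RiskAssessment:
    service: str
    per_band: dict = field(default_factory=dict)        # band -> assessed risk level
    justifications: dict = field(default_factory=dict)  # band -> reason for departing from the default

    def departures(self) -> list:
        """Bands where this service departs from the regulator's default."""
        return [b for b, level in self.per_band.items() if level != DEFAULT_RISK.get(b)]

    def unjustified(self) -> list:
        """Departures that lack a recorded justification."""
        return [b for b in self.departures() if b not in self.justifications]

assessment = RiskAssessment(
    service="ExampleChat",  # invented service
    per_band={"0-5": "high", "6-9": "high", "10-12": "medium", "13-15": "low", "16-17": "low"},
    justifications={"13-15": "messaging restricted to approved contacts for this band"},
)
print("Unjustified departures:", assessment.unjustified())  # [] -> ready to explain the departure
```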

Against such a background, it would then be a lot easier to go with the Law Commission’s final suggestion, which was that online platforms should be required to have in mind the “likely harm” that might be caused to a “likely audience”.
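As a purely illustrative aside, the “likely harm to a likely audience” idea can be thought of as combining two things a platform should already have: a picture of who its audience actually is, broken down by age band, and a per-band estimate of the harm a given feature or type of content might cause. A back-of-the-envelope sketch, with invented numbers and an invented scoring scale, might look like this:

```python
# Back-of-the-envelope sketch only: combine the platform's knowledge of its
# actual audience (share of users per age band) with a per-band harm estimate
# for a given feature. All figures and the 0-1 scale are invented.

audience_share = {"10-12": 0.10, "13-15": 0.45, "16-17": 0.30, "adult": 0.15}
harm_by_band = {"10-12": 0.8, "13-15": 0.5, "16-17": 0.3, "adult": 0.1}  # 0 = none, 1 = severe

likely_harm = sum(audience_share[b] * harm_by_band[b] for b in audience_share)
print(f"Likely harm score for this feature: {likely_harm:.2f}")
```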

Rather obviously, to do this each platform would have to know its actual audience, but then it could not have prepared its own risk assessment in the first place without that knowledge. I think that neatly ties the whole thing off. Discuss.

 
