Remember when Facebook point-blank refused to tell the UK Government anything meaningful about a range of issues connected with children’s use of their platform? We have just had an even bigger example of very similar behaviour, involving at least ten more companies. Mind-bogglingly, this time the UK Government seems to have paid to be insulted. At least when Facebook told them to get lost they did it for free.
Missed by a mile
As part of its work on the Online Harms White Paper (OHWP) the Government commissioned a consultancy company, ICF, to carry out some research. Full disclosure, I am working with ICF on another, unrelated project. I know them to be completely professional and competent but like any consultancy they can only work within the brief they are given and with the responses they receive to the questions they pose.
In this case ICF was asked by DCMS to carry out “Research into Online Platforms’ Operating Models and Management of Online Harms.” The results were submitted in June but have only just been published.
The objectives of the research
Here are the declared objectives of the project:
“ICF was contracted by the UK Department for Digital, Culture, Media and Sport (DCMS) to undertake research into how online platforms operate to tackle online harms, determining their incentives and capabilities for doing so and how they could adapt to potential regulation.”
The study’s findings were meant to be based on a combination of analysing information that was already in the public domain and information gleaned directly from relevant online platforms.
More specifically the research aimed:

- To understand what different platforms define as harm on their platforms;
- To understand the incentives, capabilities and methods of different platforms to address online harms, including technical and economic capabilities;
- To measure how effective these incentives and capabilities have been at reducing harm;
- To understand the potential impact of regulation on these platforms.
Judged by those objectives, with the possible exception of the first point, I’m afraid the report missed by a mile. The reasons are obvious, particularly when you read Appendix 3 (page 49) which lists a plethora of “limitations” the researchers had to confront, the principal one being the platforms’ unwillingness to tell them anything new or important.
Companies told the researchers nothing we did not already know
The companies that agreed to talk to the researchers (see below) disclosed nothing that wasn’t already known. You can dress that up in whichever way you like, call it “qualitative research” until the cows come home. It still leaves us a long way short of the manifest ambition of the terms of reference.
That said, there are some good and useful bits. These were down to ICF’s diligence in sifting through and analysing publicly available sources.
This is how it went
After an inception phase, an initial pool of 25 platforms was identified as being within scope. We do not know who was included in the 25 or why. The 25 were later whittled down to 11. There is no explanation as to how or why the “whittling” was done. However, I must say I was surprised to learn the researchers were able (page 3) to examine 11 “transparency reports”. I never knew there were that many out there, but since we are never told who the 11 companies are we are left in limbo.
The 11 platforms that constituted the final sample were allocated a number.
All 11 platforms were interviewed and sent an online survey. Only 4 companies (1, 4, 9, and 10), fewer than half, completed the survey. Platform 7 submitted a narrative response.
Platform 5 did not complete the survey and did not submit a narrative response but, like everyone else, they did participate in an interview. Later they would not allow ICF to refer to or use any information they shared in the interview. Does that count as a response?
Look at footnote 2 at the bottom of page 3, where the following statement appears:
“…. due to the limited responses received, conclusions are not drawn based on survey data, although information provided through the survey was still valuable.” (emphasis added).
Hmm.
Anonymity rules
Curiously, there is no discussion about why ICF agreed to or offered anonymity.
It is difficult to imagine the researchers did not look at or try to speak directly to Facebook, Google, Twitter, Snapchat and a few other obvious ones, but the truth is we don’t know who any of them are. The paying customer, HMG, does not know who the 11 are. An Advisory Board of three was established to help the project. I asked two of its members if they knew the identities of the 11. They didn’t.
Actually, insiders will know the name of at least one of the companies that co-operated, and it is not one of those I just mentioned. They will recognise the company because the report describes the professional background of one of the people the researchers spoke to, and she is unique. It is a pity her uniqueness was not pointed out because, in a sense, the others are undeservedly basking in her reflected glory.
You can argue about protecting information that was given by a fully co-operating company, but by what strange logic do we end up protecting the identities of companies that did not co-operate, for example by declining to complete the survey? Did they all insist on anonymity in return for taking part in the initial interview? We should be told.
Maybe Machiavelli was at work?
Presumably the UK Government approved this overall approach? I have therefore thought of one possible way of looking at this sorry story which casts them, if not exactly in a favourable light, at least in a different one. The UK Government set a trap and big tech walked straight into it.
If most of the platforms had genuinely co-operated, at least to the extent of completing the survey, HMG might have been put in possession of useful information that could have helped modify, shape or seriously influence their plans. But since, as things turned out, the companies refused, Ministers were handed an additional casus belli. Moreover, because the platforms that did not complete the survey are not named, suspicion is cast on all of them. That seems unfair.
Thus, if there is one obvious and stark conclusion to be drawn from what happened with the ICF project, it is that we urgently need a Regulator with the legal powers to compel platforms to answer questions. I know this was in the OHWP, but as lobbying begins to try to water it down, the results of the ICF study greatly strengthen the Government’s hand.
So what did the report actually say?
As already stated, there is unquestionably some useful information in the report. The descriptions of the position in France, Germany and Australia are one example (pages 25-30). Less impressive is reporting as a statement of fact (page 41) that “codes of conduct in relation to children as victims of online harm were in place in 25 EU member States”. Aside from that reference being nearly ten years old, knowing that there is a piece of paper is not the same as knowing that what it seeks to achieve is being delivered. This whole area is littered with unchecked, unverified and unconfirmed declarations of good intent, and yet we are where we are.
This lacuna is further highlighted when one turns to the section of the report on terrorism (page 43), where this appears:
“….the impact…. of the EU Code of Conduct concerning illegal hate speech shows that…. if evaluated and closely monitored (systems can work to a satisfactory standard).”
Well quite. Isn’t this a screamingly important statement? Terrorism stands alone in receiving such careful, consistent attention and scrutiny. I’m glad terrorism is taken so seriously, but isn’t there an obvious follow-on question? Why hasn’t child protection been given similar treatment? But that was yesterday. OHWP is about tomorrow.
Anything else?
Strap yourself in. You will be shocked and astonished. Not. Here are a few choice quotes from platforms:
“A commonly held view… is that non-legal (including self-regulatory) obligations and measures have been more impactful and helpful in shaping approaches to tackle online harms than legal obligations have.”
“Three platforms stressed that self-regulation triggers dialogue and mutual monitoring between stakeholders (including platforms, governments and civil society) something which is perceived to be necessary in such a fast-changing environment.”
“When it comes to the platform’s (sic) approach to tackling online harm, self-regulation was reported to be more incentivising than legal regulation, which platforms understand can be overly prescriptive.”
“Platforms perceive that self-regulation allows them to more effectively tackle online harms than legal obligations.”
“Platforms report that self-regulation is positively impactful to the approaches taken by platforms to tackle online harms as they encourage constant iteration and knowledge sharing in this area, whilst also helping ensure that platforms are agile enough to tackle harms which are new or emerging.”
Those limitations
Appendix 3 sets out the valiant steps taken to try to mitigate (work around) the “limitations” the researchers had to address.
This blog is already too long, so I won’t list them all, but a brief perusal will show you that many online platforms are desperately seeking to hang on to as much of the failed past as possible. That is foolish beyond words.
The UK may be out front on these issues but I cannot think of a single liberal democracy where similar impulses are not in play, and for more or less identical reasons. Trust in online platforms has pretty much evaporated. Being asked to take an internet company’s word for something, almost anything, just will not wash any more. Think banks, energy suppliers, insurance companies, telephone companies, the food industry, TV. That’s the territory we are in now.
The way online platforms and their surrogates have responded in the UK so far will only harden the resolve of the UK Government and Parliament to press ahead.
In an ideal world, Governments would bring forward legislation after the evidence had been gathered, but, as we have just seen here and before, online companies have refused to play ball. I think they do that in the hope either that politicians will move on to something else, or that they can prolong the status quo for as long as possible to keep the cash rolling in. Decisions of this nature are not taken by the people on the frontline who deal with online safety issues day in and day out. They are taken at the very highest level within the business.
Haughty denunciations of Governments’ and politicians’ ignorance, or of their “emotionally driven” or “media driven” agendas, display a technocratic contempt for the way democracies work. This really helps no one and says more about the speaker than the spoken about.