Yesterday two of Britain’s top GCHQ spooks published “Thoughts on child safety on commodity platforms”. “Commodity platforms” includes but is not limited to social media and their associated messaging capabilities.
The substantive document might be heavy going for a lot of people, since its target audience is mainly the techie world, but there is a much more readable and succinct account in a Lawfare blog, and The Times carried a piece which neatly summarises the position.
“Is it possible to reconcile encryption and child safety?” is the title of the Lawfare blog and the short answer is “yes it is”. We have known this for some time, but it is great to have it publicly confirmed by such an authoritative source and in a way which will allow the geek community to go over the case in minute detail.
Not a binary choice
If there is one key take-away from “Thoughts on child safety” it is this: contrary to some of the more hysterical scaremongering of elements of the so-called privacy lobby, the report shows we are not confronted with a future where we have a simple binary choice between creating safe spaces for child abusers and creating insecurity by default for everyone else.
What is challenging about the report is how (and I think this is a world first) it goes further and criticises the shortcomings of some of the existing approaches to online child protection, saying they rely too heavily on artificial intelligence and machine learning. The spooks say the complexity of the issues facing children in the online space means such techniques have only “limited effectiveness”. They do not suggest these approaches should be dropped. These approaches are doing a good job, but the report points to additional considerations which could produce even better outcomes.
Meanwhile in other news
With exquisite timing, surely a coincidence (not), the week before “Thoughts on child safety” hit the headlines a report prepared by Alec Muffett for Privacy International appeared. There just has to be a competition going on in geekland. A prize must be awarded annually to whoever comes up with a title most likely to persuade people not to read the document concerned. Run to the bookies and put a ton on Muffett’s “A Civil Society Glossary and Primer for End-to-End Encryption Policy in 2022”. It makes the GCHQ offering look positively populist.
Having said that, Muffett’s substantive offering is elegantly if densely written, and comprehensive to the point of being way too long for mere mortals. The only problem with it is that its core tenets are simply either wrong or unacceptable.
I am reliably informed Muffett is a highly skilled coder who has done important technical work in the course of a long career. However, and I have made this point many times, that does not give him or any other geek prior or greater rights when it comes to expressing an opinion about how society should function. Inefficient and annoying as it can sometimes be, we live in a democracy, not a technocracy. We listen to expert opinion respectfully, but a techno-Priesthood of the Sealed Knot has no veto.
Muffett himself acknowledges this when he protests about being
“dragged into semantic or architectural, rather than political debate regarding privacy” (emphasis added).
I couldn’t agree more.
We have to decide what kind of world we want to live in and make tech our servant, not our out-of-control or beyond-our-control master.
He also wants to change the terms of the debate. Per Muffett, what we have always understood end-to-end encryption to be is no longer adequate, because it leaves open the possibility of doing what Apple and others have proposed, namely examining material that has triggered a red flag for likely criminal content, an inspection which can only happen before the material is encrypted. What we would end up with, seemingly, according to Muffett, is a
“potentially abusable listening device”.
That is a tendentious, polemical way of framing the debate but, even if it were true, doesn’t it rather underline the importance of devising ways of managing these processes so as to ensure the apparently worrying potential is never realised? This means establishing the systems in ways which allow them to be properly supervised and managed, with appropriate forms of transparency and accountability. In other words, in ways which allow all but the most paranoid or delusional to feel comfortable with the service they are using.
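To make the mechanics concrete, here is a minimal sketch of where such a pre-encryption check would sit in a messaging pipeline. It is illustrative only: the digest list, the encryption stand-in and the reporting hook are all hypothetical placeholders, and real proposals (Apple’s included) use perceptual hashing and cryptographic matching protocols rather than the plain SHA-256 lookup used here for brevity.

```python
import hashlib

# Hypothetical digest list of known illegal images, supplied and audited
# by a child-protection authority. Real systems use perceptual hashes so
# that trivial re-encoding does not defeat a match; plain SHA-256 is a
# stand-in for readability.
KNOWN_ILLEGAL_DIGESTS = {"0" * 64}  # dummy entry

def encrypt_for_recipient(data: bytes, recipient: str) -> bytes:
    """Stand-in for the platform's end-to-end encryption layer."""
    return data[::-1]  # placeholder transformation, NOT real encryption

def flag_for_review(digest: str, sender: str) -> None:
    """Stand-in for an audited, accountable reporting channel."""
    print(f"match on {digest[:12]}... from {sender}: queued for review")

def send_image(image_bytes: bytes, sender: str, recipient: str) -> bytes:
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in KNOWN_ILLEGAL_DIGESTS:
        # Only a match against known material triggers anything at all.
        flag_for_review(digest, sender)
    # Matched or not, the content is still encrypted end to end; the
    # check happens on the device, before encryption, and nothing that
    # fails to match is recorded or transmitted anywhere.
    return encrypt_for_recipient(image_bytes, recipient)
```

The point of the sketch is the ordering: the check is local, it consults only a fixed and auditable list, and the encrypted channel itself is untouched, which is precisely where supervision, transparency and accountability can be bolted on.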
One way of looking at the history of the internet is as an evolution in which we learn, improve and move on. We insist on prudent risk assessments, but we don’t bring things to a full stop because of highly improbable hypotheticals.
In my view, in respect of any proposed technical approach designed to protect children on the internet the first question must be
“can this solution do what it says on the tin, and how can we reassure people it does nothing else?”
Talking in a field
At the root of Muffett’s thinking, and he refers to it repeatedly, is what he calls the “Field Model”. We are told that
“All that was once necessary for two or more people to have a private conversation was for them to walk into a field – away from eavesdroppers – where they could simply talk…”
The aim of policy, he suggests, should be to recreate those conditions. At this point I am almost tempted to throw my hands up and surrender.
I have written before about how applying analogue thinking to a digital world leads straight to disaster. Applying agrarian, antediluvian thinking to the digital world gets us there even faster. See above.
I repeat: we are not faced with a binary choice between willingly, or stupidly, letting in an intrusive state prepared to behave illegally by spying on people’s private communications, and granting impunity to paedophiles.
All the new, smart child protection systems do is pick up evidence of likely criminal behaviour, sufficient to justify closer inspection to confirm, or otherwise, that criminal behaviour is in fact afoot. If there is no sign of criminal behaviour, no further action is taken, nobody’s reputation is harmed and nobody’s messages are opened up or examined.
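One design which gives effect to “no sign, no action” is threshold-based escalation, which Apple’s published proposal relied on: nothing at all was escalated until an account accumulated a threshold number of matches, reportedly around thirty. A toy sketch of that logic follows, with an illustrative threshold rather than any vendor’s actual parameter:

```python
# Toy sketch of threshold-based escalation: isolated matches trigger no
# action, and unmatched content is never recorded in the first place.
MATCH_THRESHOLD = 30  # illustrative value only

match_counts: dict[str, int] = {}  # per-account tally of hash matches

def record_match(account_id: str) -> bool:
    """Return True only once an account crosses the review threshold."""
    match_counts[account_id] = match_counts.get(account_id, 0) + 1
    return match_counts[account_id] >= MATCH_THRESHOLD
```

Below the threshold the tally is the only thing that exists: no message is decrypted, no human looks at anything, and an account that never matches never appears in the tally at all.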
But we should be clear: substantial majorities of people in very different countries are willing, if it comes to it, to surrender some of their privacy if it means children are likely to be safer. What the spooks tell us is they don’t have to make such a choice. All we need now is for Big Tech to get on board, stop lobbying in the opposite direction and help make the internet a better, more trusted space for all of us, but particularly for our children.