Joy tinged with anger

At 5.00 a.m. today the Head of Instagram published a blog entitled “An important step towards better protecting our community in Europe”.

There is much that is important and of interest in Facebook’s blog, so please read it, but here, for me, are the key sections:

“We use technology to help… proactively find and remove… suicide and self-harm content… Between April and June this year, over 90% of the suicide and self-harm content we took action on was found by our own technology before anyone reported it to us. But our goal is to get that number as close as we possibly can to 100%.

Until now, we’ve only been able to use this technology to find suicide and self-harm content outside the European Union.”

European children deprived of protection

So children and young people everywhere else in the world have been benefiting from Facebook’s deployment of proactive tools which help stop young people killing or harming themselves. Children in Europe haven’t been. Why? To answer that we have to look to the Irish Data Protection Commissioner (DPC).

Seemingly, having started monitoring this type of content in 2017, Facebook raised the matter with the DPC back in March 2019. The DPC “strongly cautioned Facebook because of both privacy concerns and a lack of engagement with public health authorities in Europe on the initiative.”

Facebook followed the DPC’s advice and consulted with health authorities. Nevertheless, the DPC still said “concerns remain regarding the wider use of the tool to profile users… culminating in human review and potential alerts to emergency services”.

You might want to read that again. It’s hard to believe anyone could be anxious about the possibility of an ambulance or a police officer knocking on a door in the expectation of saving a life, or that such a thing could be frowned upon or obstructed. Certainly in the UK we are constantly told to contact the emergency services if we have any reason at all to suspect someone is in danger, particularly if that someone is a child.

Just to remind you, in the GDPR and in every legal instrument I know, the position of children is said to require extra care and attention. Yet it is starting to feel that whenever a traditional privacy lawyer writes or drafts something, things end up all wrong. Go figure.

And by the way, there are no issues of principle associated with Facebook sending a message to the police or the ambulance service if someone has made an individual, manual report to them about a person they believe is at risk. It is only if the tools are deployed proactively, at scale, that the DPC gets agitated.

So a malicious or mischievous report gets acted on, while a genuine case may never surface because a machine is not allowed to look for it. Where’s the logic in that?

Have we taken leave of our collective senses?

Could the tragic death of Molly Russell have been avoided if these tools had existed then? Who can say? But equally I am certain I will not be alone in wondering what kind of world we are creating if, in the name of privacy, we allow these things to happen when we had the possibility of stopping or reducing them.

We have been content to allow the internet to do things that not many years ago would have seemed utterly unbelievable. Saving children’s lives? That’s where we draw a line?

Emotional? Too right it’s emotional

I have heard it said that we shouldn’t be too emotional about these questions. Excuse me. What that is actually saying is that we should detach ourselves from our humanity. It hardly matters to me what impact technology might have on a lump of concrete or other inanimate object but, if you have it within your power to stop the pain, death or suffering of another human being, only a desiccated robot could turn away and say “no”.

The technology that has built huge fortunes for entrepreneurs, and pays vast salaries to employees who know the colour of your socks, where you go on holiday and what you eat for breakfast, cannot be turned to saving lives? I understand about “balance” and “safeguards”, but whenever I hear those words what I am usually hearing is “no” again.

It’s not about privacy. It’s about trust

The mantra of the internet has always been innovation and the wonderful benefits technological advances can produce.

So now technology allows us to detect when a child is contemplating killing themselves. We have technology which allows us to detect when a paedophile is attempting to groom a child. We have technology which can help protect the privacy rights of children who have been raped and further humiliated by having images of their rape broadcast to the world.

Why would we not use them?

Because some people do not trust Big Tech to use these tools lawfully, i.e. in ways which do not exploit people’s data in a manner the law already forbids anyway.

The real answer, therefore, is to address the lack of trust in Big Tech. And that means addressing transparency. And the fact that our politicians and institutions have so far failed to do this is no reason, now, to make those tools illegal. That is treating a symptom, not the disease. We need to get at the disease.

My next blog

I fear my next blog will not be a happy one either. Yesterday we had great news about LIBE agreeing to take the item on 16th November, and that remains the case. But other things have happened today. Watch this space. It ain’t over ’til it’s over.
