If you read the blog I posted yesterday, you will see that the European Union is discussing how to make it harder for companies to engage proactively in removing child sex abuse materials from their platforms. Today Facebook has shown us just how powerful a tool technology can be in doing precisely that. I trust this will send a strong message.
On their global platform in Q3 2018 (the three-month period ending on 30th September), Facebook removed 8.7 million pieces of content which they had detected, or received reports on, as violating their policy on child nudity or child sexual exploitation. 99% of these were detected without having been reported. This does not necessarily mean the images were gone before anyone other than the originator saw them, but that could be the case.
Not all of the images would be illegal but…
Because Facebook has such a strict policy on nudity, it is likely that not all of the images removed would be illegal under English or US law. Consequently, not all of them would have been reported to NCMEC, nor would they necessarily have been found to be illegal by the IWF.
A great many are likely to have fallen within the “grey areas” that Jutta Croll and her colleagues in Germany have been talking about for some time, proving there are things that can be done to address a problem many thought was impossible to get at. Gut gemacht (well done). Even so, it is likely that a substantial proportion of the images Facebook removed were illegal in many jurisdictions.
It doesn’t stop there
Facebook also announced today that they are developing software to help NCMEC prioritize the reports it passes on to law enforcement for investigation. The idea is to give the police a heads-up on which reports are most likely to be linked to the most serious cases. Facebook and Microsoft are also teaming up to create tools which will be made available to smaller firms to help reduce bad or illegal behaviour.
With an echo of Google’s earlier announcement about its “image classifier”, Facebook too is developing ways to find images that have not previously been seen by human eyes and determined to be illegal or contrary to a platform’s policy.
Technology solving problems technology has created
Facebook tells us only 1% of the images they removed were the product of a report made by a human. Google told the same story when they released similar data. Technology is solving a problem technology created.
When you set these sorts of numbers and percentages against what has been achieved by systems which rely on humans making reports, you don’t have to be Einstein or a weather forecaster to see which way the wind is blowing. Project Arachnid hints at what the world might look like soon.
And when you read about the terrible distress suffered by the poor souls who work as content moderators, you can only hope companies will bring this technology on as quickly as possible.
Expecting people, many of them likely living in far-flung corners of the planet, probably employed on minimum-wage rates and perhaps with little local support, to sift out and remove the most gross, violent or revolting portrayals of the worst aspects of some human beings’ behaviour is hard to defend.
Yes, we need safeguards and clear parameters to govern machine-driven processes. AI is never likely to be 100% perfect, so humans will have to stay involved to resolve the marginal cases and maintain the system. Yes, we need genuine transparency so we can all be reassured things are working as intended. But today all I can say is: well done, Facebook.