In the blog associated with this one, perhaps I should have made clearer that reports to NCMEC began rising steeply following the launch of PhotoDNA in 2009. It proved transformative. Over time, major technology companies—including Google and Facebook (now Meta)—adopted PhotoDNA and also went on to develop their own complementary technologies. Specialist cybersecurity firms began to enter the space.
PhotoDNA marked a turning point. It gave the industry the means to shift from reactive moderation to voluntary, proactive detection, dramatically improving its ability to identify, report and remove csam, and to prevent it being re-uploaded, helping to break the whack-a-mole cycle.
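For readers who want a concrete picture of what "proactive detection" means in practice, here is a minimal sketch in Python. It is emphatically not PhotoDNA, which is a proprietary perceptual hash designed to survive resizing and re-encoding; this illustration uses exact SHA-256 hashes, so it would only catch byte-identical copies, and every name in it is hypothetical.

```python
# A minimal sketch of hash-based matching of the kind PhotoDNA made practical.
# NOT PhotoDNA itself: PhotoDNA is a proprietary perceptual hash robust to
# resizing and re-encoding, whereas SHA-256 only matches byte-identical files.
import hashlib

# In production this would be a vetted database of fingerprints of known
# csam maintained by bodies such as NCMEC; here it is an in-memory set.
KNOWN_HASHES: set = set()

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest acting as the image's fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_block_upload(image_bytes: bytes) -> bool:
    """Proactive check at upload time: known material is blocked and
    reported, instead of waiting for a user complaint (reactive moderation)."""
    return fingerprint(image_bytes) in KNOWN_HASHES
```

The design point is that the check happens before the image is published, which is what breaks the re-upload cycle.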
2008 was therefore the last pre-PhotoDNA year.
NCMEC's 2009 Annual Report published details of reports received in 2008. The total number received that year was 61,055, an 84% increase on the previous year. These reports referenced 700,939 individual still pictures and videos, up from 609,206 in 2007.
As more and more companies deployed PhotoDNA or similar technologies, the number of reports to NCMEC rose sharply. That growth was almost certainly attributable both to the improved methods of detection and to the wider global increase in internet usage and the changing nature of the internet. However, in a very important sense, the reasons for the growth in the number of reports are less important than the fact of it. Every single image counts.
In 2017 the Canadian Centre for Child Protection launched Project Arachnid. They linked a web crawler to a database of known images and sent it out to see what it could find. This was a world first. The sketch below illustrates the basic idea.
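As a rough illustration only, and assuming nothing about Arachnid's actual implementation, the combination of "crawler plus fingerprint database" can be sketched in a few lines of Python:

```python
# A toy sketch of a crawler tied to a database of image fingerprints,
# illustrating the idea behind Project Arachnid. The real system is far
# more sophisticated; every detail below is illustrative, not Arachnid's code.
import hashlib
import re
import urllib.request
from collections import deque
from urllib.parse import urljoin

KNOWN_HASHES: set = set()  # fingerprints of known images, supplied by hotlines

def crawl(seed_urls, max_pages=100):
    """Breadth-first crawl; return URLs of pages hosting known material."""
    queue, visited, hits = deque(seed_urls), set(), []
    while queue and len(visited) < max_pages:
        page_url = queue.popleft()
        if page_url in visited:
            continue
        visited.add(page_url)
        try:
            html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
        except (OSError, ValueError):
            continue
        # Fingerprint every image on the page and check it against the database.
        for src in re.findall(r'<img[^>]+src="([^"]+)"', html):
            try:
                image = urllib.request.urlopen(urljoin(page_url, src), timeout=10).read()
            except (OSError, ValueError):
                continue
            if hashlib.sha256(image).hexdigest() in KNOWN_HASHES:
                hits.append(page_url)
                break
        # Queue up links to pages not yet visited.
        for href in re.findall(r'<a[^>]+href="([^"]+)"', html):
            queue.append(urljoin(page_url, href))
    return hits
```

The mechanism is simply fetch, fingerprint, compare, performed at enormous scale.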
In trials run for six weeks prior to launch, over 230 million web pages were examined, and 5.1 million of them were found to be hosting csam. By comparison, NCMEC's report for 2016, published in 2017, logged 8.2 million reports for the entire year. I will try to provide online links to some of the sources for these numbers, but several of them appear to be behind locked doors at the moment.
Arachnid was not looking in the same places as NCMEC or using the same sources, although there would have been overlaps. Even so, the sheer scale of material being found on web pages on the open internet, as evidenced by the Canadians, was a profound shock. Arachnid strongly suggested that the methods used hitherto, across the board, by businesses and hotlines around the world were falling a long way short of meeting the need.
Jumping forward, in the four-year period 2019-2022 inclusive, NCMEC received a total of approximately 100 million reports of csam. Roughly 80 million of those reports came from Meta, and more than 80% of Meta's reports were linked to Messenger and Instagram Direct.
Btw, of the 100 million, the number of reports which resolved to the UK grew as follows: 74,330 in 2019; 75,578 in 2020; 97,727 in 2021; and 316,900 in 2022. That is over 550,000 in total. No wonder the UK's National Crime Agency expressed great concern about Meta's encryption plans (of which more below), saying Meta was "putting children's lives in danger".
In 2023 NCMEC received over 36 million reports referencing 105 million files of suspected csam, up 12% on the previous year. Almost 55 million of those files were still images and 50 million were videos. Quite a shift from the situation reported in 2009.
Now the numbers are going down
For the reasons given in the associated blog, in 2024 NCMEC received 20.5 million reports. While some of this drop can be accounted for by changes to the way reports are now made, NCMEC helpfully restated the 2024 figure using the same counting method as the previous year. On that like-for-like basis, the "real" or comparable number had gone down from 36 million to 29.2 million. A fall of almost 7 million is doubly concerning because new Federal laws in the USA had established new categories to which the rules on mandatory reporting apply. Other things being equal, the numbers would once again have been going up. But other things were not equal. They were becoming very unequal.
There is little doubt that the decline in the number of reports being made to NCMEC was almost entirely down to a decision taken by one man: Mark Zuckerberg.
As noted in the associated blog, in 2019 Zuckerberg announced that he wanted his company to "pivot to privacy", and part of that meant an end to proactively searching for csam as end-to-end encryption was introduced into major areas of its operations. In other words, Zuckerberg intentionally blinded himself.
All that said, Meta is a fact of life. We have to deal with them, but they should not be garlanded with flowers and sweet words.
And when other companies follow Meta's lead and start using end-to-end encryption at scale on their platforms, we will know where to point the finger.
My only hope is that, in the UK, Ofcom will use its powers under the Online Safety Act 2023 to force a retreat, that the EU will do the same under the Digital Services Act (Articles 34 and 35), and that Australia will act likewise. If Meta has to revert to the status quo ante in those jurisdictions, why wouldn't it do so everywhere?