Not long ago Mark Zuckerberg announced that Facebook was going to “pivot to privacy”. Apparently the company will apply this idea to WhatsApp, Messenger and Instagram (but not to the Facebook platform itself).
This announcement caused all kinds of fluttering in the dovecotes. Among other things it suggested that, on three of the world’s largest messaging and communications services, it would become impossible for the company to detect illegal content such as child sexual abuse material (CSAM). Good for criminals, bad for children.
It goes without saying the company is aware of this. In his blog post Mr Zuckerberg notes:
“Encryption is a powerful tool for privacy, but that includes the privacy of people doing bad things. When billions of people use a service to connect, some of them are going to misuse it for truly terrible things like child exploitation, terrorism, and extortion. We have a responsibility to work with law enforcement and to help prevent these wherever we can. We are working to improve our ability to identify and stop bad actors across our apps by detecting patterns of activity or through other means, even when we can’t see the content of the messages, and we will continue to invest in this work.” (emphasis added).
I have good news for Facebook. It is contained in a blog post published last week by (the venerable) Hany Farid, inventor of PhotoDNA.
Facebook’s decision is important but it is also a lot less than it seems
First of all, Farid points out that “the majority of the millions of… Facebook reports (of CSAM) to NCMEC’s CyberTipline originate on Facebook’s Messaging services”, so he is telling us this matters if we care about children. However, he goes on to say:
“Even without the ability to read the contents of your messages, Facebook will still know with whom you are communicating, when you are communicating, from where you are communicating, as well as a trove of information about your other online activities.
This is a far cry from real privacy.”
And if Facebook knows all that stuff then, presumably, following a properly warranted request, law enforcement could also know it. Let’s leave on one side for now what happens when, for example, quantum computing comes along, giving anyone with that kind of capability the potential to break the public-key encryption schemes on which most of today’s secure communications rely.
Either way, Farid’s point is that, in reality, even if Facebook does encrypt the content of messages, that represents only an incremental step forward in terms of privacy. It is not the order-of-magnitude change Facebook’s PR department was trying to present it as. No doubt they had their reasons.
But here’s the answer
Farid acknowledges that the adoption of end-to-end encryption would “significantly hamper” the efficacy of technologies such as PhotoDNA, but he also makes clear that such systems can be adapted to operate within an end-to-end encrypted system:

“Specifically, when using certain types of encryption algorithms (so-called partially- or fully-homomorphic encryption), it is possible to perform the same type of robust “image hashing” on encrypted data. This means that encrypted images can be analyzed to determine if they are known illicit or harmful material without the need, or even ability, to decrypt the image.”
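To make “robust image hashing” concrete, here is a minimal sketch in Python. It is illustrative only: PhotoDNA’s actual hash function is proprietary, so a simple “average hash” stands in for it, and the function names and the sample hash list are hypothetical. Farid’s point is that, with partially- or fully-homomorphic encryption, the same comparison could be carried out on the encrypted data, so the service never needs to see the plaintext image.

```python
# Illustrative only: a simple "average hash" stands in for a robust perceptual
# hash such as PhotoDNA. The matching step compares an image's hash against a
# list of known-bad hashes, tolerating a few differing bits so minor edits
# (re-compression, slight resizing) still match.
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Return a 64-bit perceptual hash: one bit per pixel, set if brighter than the mean."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of bits on which two hashes differ."""
    return bin(a ^ b).count("1")

def matches_known_material(image_hash: int, known_hashes: list[int], threshold: int = 5) -> bool:
    """True if the hash is within `threshold` bits of any hash on the known list."""
    return any(hamming_distance(image_hash, h) <= threshold for h in known_hashes)

# Hypothetical usage: `known_hashes` stands in for a curated hash list such as NCMEC's.
known_hashes = [0x1F2E3D4C5B6A7988]  # placeholder value, not a real entry
print(matches_known_material(average_hash("upload.jpg"), known_hashes))
```

The tolerance threshold is what makes the hash “robust”: an exact cryptographic hash would miss a copy that had merely been re-compressed or lightly cropped, whereas a perceptual hash still lands within a few bits of the original.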
I’m told it could even be possible to scan material before it is encrypted (see below). Thus, if CSAM is a principal concern, there is light at the end of the tunnel.
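For the scan-before-encryption idea, the sketch below shows roughly how a client-side check could sit in front of the encryption step, reusing the hashing helpers from the previous sketch. Again, this is an assumption-laden illustration rather than anyone’s actual implementation: Fernet from the cryptography package stands in for a messenger’s real end-to-end encryption layer, and prepare_attachment is a hypothetical name.

```python
# Sketch of a scan-then-encrypt flow on the sender's device. Reuses average_hash()
# and matches_known_material() from the previous sketch; Fernet is only a stand-in
# for a real end-to-end encryption layer such as the Signal protocol.
from cryptography.fernet import Fernet  # pip install cryptography

def prepare_attachment(path: str, known_hashes: list[int], key: bytes) -> bytes | None:
    """Hash-check an image before encrypting it; return ciphertext, or None on a match."""
    if matches_known_material(average_hash(path), known_hashes):
        # A deployed system would presumably file a report rather than silently dropping it.
        return None
    with open(path, "rb") as f:
        return Fernet(key).encrypt(f.read())

key = Fernet.generate_key()  # per-conversation key, purely for illustration
ciphertext = prepare_attachment("holiday.jpg", known_hashes, key)
if ciphertext is not None:
    pass  # hand the ciphertext to the transport layer; the server only ever sees ciphertext
```

The ordering is the whole point: the check runs on the device, before the content enters the encrypted channel, so the provider never needs the ability to decrypt anything.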
And what are we to make of Apple?
Which brings me to the strange case of Apple.
Andrew Orr is clearly the kind of awkward geeky type who pays attention to the small print. By that I mean he reads the stuff. I’ve never met him but already I feel we could be fast friends.
Apple updated their Ts&Cs on 9th May. Orr found this new set of words under the heading:
“How we use your personal information.”
“We may also use your personal information for account and network security purposes, including in order to protect our services for the benefit of all our users, and pre-screening or scanning uploaded content for potentially illegal content, including child sexual exploitation material.” (emphasis added).
Orr then made an imaginative leap. He concluded that Apple were acknowledging they were either already using, or planning to use, PhotoDNA or something similar.
Bravo Apple. But wait. Last week colleagues in the USA met with senior people from Apple. My understanding is that, despite being asked directly and repeatedly about Orr’s hypothesis, Apple refused to confirm or deny anything.
Great minds think alike (or is it “fools seldom differ”?). On 9th June I wrote to Apple’s point guy in the UK asking exactly the same question. Answer came there none.