Brief memo from “Couldn’t Make It Up Dept.”

Sometimes you have to pinch yourself. And I know I am posting on 1st April but, believe me, this is not a joke.

Another Meta internal document has found its way into the public domain, falling into the grateful hands of the New York Times, among others.

The document provides guidance to moderators working for the company. To quote from the New York Times:

“Meta, the parent company of Facebook, Instagram, Messenger and WhatsApp, has instructed content moderators for its platforms to “err on the side of an adult” when they are uncertain about the age of a person in a photo or video” (emphasis added).

Note. It does not say “err on the side of protecting children”.

Meta, in the form of Antigone Davis, advances two wholly implausible, or at any rate unacceptable, reasons for its policy. One is that if they erroneously flag a child sex abuse image it could have “life changing consequences for users”. Presumably she means for the person or people who posted the image.

Note. Antigone Davis did not refer to the “life changing consequences” for a child whose abuse might go undetected because the image in question went unreported.

Seemingly this policy was first spotted last year when Anirudh Krishna wrote about it in the California Law Review. She says company moderators told her the effect is to “bump up children to young adults”. Quite so.

If you get the time please read the whole article and see, for example, the references to “incentivizing inaction” and “racial and gender bias.”

An apparent concern for law enforcement is the second reason Ms Davis gives as an explanation of why the company wishes to reduce the number of reports.

She says “If the (law enforcement reporting) system is too filled with things that are not useful… this creates a real burden.”

Meanwhile a deafening silence….

Ms Davis does not mention the company’s repeated attempts to explain away or minimise the significance of the large number of CSAM reports coming from Facebook, e.g. by suggesting many of them are distributed not with criminal intent but out of disgust, outrage or poor taste, or as part of, or to further, a campaign to draw attention to the evils of child sex abuse.

Neither does Ms Davis refer to the company’s declared intention to use strong encryption in its messaging apps.

With strong encryption a large proportion of these reports will simply disappear. The child sex abuse won’t stop. The circulation of the images will not decrease – if anything it is likely to increase – the only difference is that Meta will no longer be able to see them, and therefore will no longer be able to report or moderate them.

The number of reports going to NCMEC will reduce substantially. Meta’s image gets a wash and brush up at a stroke, while at the same time the company slashes its operational costs and its PR and legal liabilities. Result.

Against such a background people will be a little sceptical about the rationalizations Ms Davis seeks to deploy to justify the company’s decision.

Render unto law enforcement that which is law enforcement’s 

If you read the New York Times piece you will see several law enforcement officers strongly disagreeing with Meta’s stance.

Meta should be going in entirely the opposite direction. If there are bottlenecks in law enforcement systems, that is a problem for law enforcement to solve.

To put that slightly differently, would any self-respecting law enforcement officer tell Meta they were OK about it withholding information about a potential crime against children, either at all or in circumstances where the company itself stands to benefit? No. I didn’t think so either.

Meta does not decide who gets investigated or prosecuted

The fact that Meta reports something will only have life changing consequences for whoever posted it if the law enforcement agency that sees the image concludes it appears to be of a child being sexually abused, and decides to act.

In the New York Times piece Davis refers to a penumbra of doubt about what might happen if Meta, as a company, makes a wrong decision about reporting something. That’s a new one. But see above. It is hard to take seriously; it looks more like a desperate and belated grab at yet another flimsy straw.

Every single image counts because every single child counts. If Meta has a “strategic focus” (corporate-speak) on reducing the large number of child sex abuse reports stacking up against its name, then it is not in the least bit surprising that stupid decisions like these will be made. But they are still stupid. And bad.

The dilemma

Once more we are faced with a dilemma. There is something that feels a little unfair about us all constantly laying into Meta because it is so transparent about what happens on its platform and because it has been so successful as a business in capturing such a huge share of the market. What about all the platforms that do not publish similar data because they can’t or won’t, or the ones that don’t report to NCMEC?

But when Meta keeps doing stuff like this it is impossible to ignore. Mark Zuckerberg has built something so large and complex that it is truly out of control, ever susceptible to “black swan” type events. He never wanted Facebook or any of his businesses to be agents of evil. He only ever intended them to be a force for good. But we all know about the road to hell. The fact that Zuckerberg didn’t mean it butters not a single parsnip.