A problem of trust. Not tech. Part 2

Moderation, inspection and AI

Human moderators, analysts and inspectors can be very expensive, especially if you have to employ them in large numbers. However, you cannot moderate, analyse or inspect what you cannot see.

Using end-to-end encryption (E2EE) can therefore make a lot of sense financially for messaging services. E2EE can either completely eliminate such costs or substantially reduce them.

Deploying clever AI is likely to be pointless in respect of strongly encrypted messages, as it will not be able to penetrate the encrypted veil.

That’s another cost-saving or at any rate a cost and complexity problem avoided.

You may need to retain some moderation or inspection capacity in respect of any public-facing or non-encrypted parts of your service. You may also choose to use AI tools to interrogate your customers’ metadata which, typically, are unencrypted (see below), but that’s different.

Reporting mechanisms for the company

In E2EE systems, the need for or the cost of establishing or maintaining user reporting systems will also likely be diminished commensurately.

And btw, “user reporting”? Who could be against allowing, even encouraging, users to report? Not me.

However, in 2021 only 0.8% of reports to NCMEC came from members of the public. In other words, 99.2% came courtesy of companies, almost all of which were using automated tools. So sure, have as much user reporting as you like, but enough of the hype. User reporting is not an alternative to deploying smart tools that work at scale.

External reporting

If the business can’t see anything in its messaging environment, it may not need to establish or maintain any kind of mechanism for reporting to the authorities or, if it does, it may only need a smaller set-up. That’s another saving, or cost avoided.

Reputational and PR damage avoided

Much of what is reported to the authorities, e.g. to NCMEC and the police, is sooner or later made public, or at any rate the numbers are. But if no reports, or far fewer, are made, the reputational damage is going to be less, even zero.

Reputation enhanced? Virtue claimed?

If you run a service which has, or had, high numbers of bad reports linked to it, and the numbers fall significantly after you implement E2EE, this might lead some people to think you did something really good to address the issues that were previously generating all those reports.

In reality, introducing E2EE where previously it did not exist is almost guaranteed to lead to the service being used for criminal purposes to a greater, not a lesser extent. But nobody will know for sure so you can blag your way to sainthood.

By the same token, if you were always using strong encryption, so have never seen much to report to NCMEC or the police, some may wrongly conclude your service is safer or better or that you take more care as compared to others. No, no, no.

Legal risks avoided or reduced?

In many jurisdictions an E2EE service may well see its legal liabilities reduced or completely eliminated, as it cannot possibly be liable for something it could not have seen or known anything about.

This may remain true even if the service provider intentionally ensured they couldn’t see or know anything.

But in other jurisdictions, either now or soon, wilfully depriving yourself of the ability to detect and curb crime, in other words wilfully creating conditions which are likely to increase crime, may become unlawful, if it isn’t already. Like aiding and abetting, or reckless endangerment.

This is why many people argue there should be no further roll-out of E2EE unless and until whoever is doing the rolling can prove (repeat, “prove”) they have devised other ways to keep crime in check which at least match what would have been possible if no E2EE were in place.

Making money from the metadata

While on the subject of money, even though E2EE is being used to encase and hide the content of the message itself, the company can still collect, analyse and commercially exploit the unencrypted metadata associated with and wrapped around the message and transmission package.

And they do. Big time.
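To make the content/metadata split concrete, here is a minimal sketch of what an E2EE message envelope looks like from the provider’s side. It is illustrative only, not any real service’s protocol: the XOR “cipher” is a toy stand-in for a real E2EE scheme (such as the Signal protocol), and the field names are invented for the example.

```python
import json
import secrets

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real end-to-end cipher: a one-time-pad XOR,
    # used here only to make the payload unreadable in transit.
    return bytes(b ^ k for b, k in zip(data, key))

# The key lives only on the two endpoints; the provider never holds it.
key = secrets.token_bytes(64)
content = b"hello, this is the private message body"

# What the messaging service actually relays, and can therefore inspect:
envelope = {
    "sender": "user_a",          # metadata: visible to the provider
    "recipient": "user_b",       # metadata: visible to the provider
    "timestamp": 1700000000,     # metadata: visible to the provider
    "ciphertext": xor_encrypt(content, key).hex(),  # content: opaque
}

print(json.dumps(envelope, indent=2))
# The provider (and any AI scanning tool it runs) can see who talked to
# whom and when; the ciphertext field is just noise without the key.
```

The point the sketch makes is that encryption hides only the `ciphertext` field. Everything else in the envelope, the who, the whom and the when, stays in the clear for the provider to collect, analyse and monetise.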

More on this in my next blog. Coming soon.


About John Carr

John Carr is one of the world's leading authorities on children's and young people's use of digital technologies. He is Senior Technical Adviser to Bangkok-based global NGO ECPAT International, Technical Adviser to the European NGO Alliance for Child Safety Online, which is administered by Save the Children Italy and an Advisory Council Member of Beyond Borders (Canada). Amongst other things John is or has been an Adviser to the United Nations, ITU, the European Union, the Council of Europe and European Union Agency for Network and Information Security and is a former Executive Board Member of the UK Council for Child Internet Safety. He is Secretary of the UK's Children's Charities' Coalition on Internet Safety. John has advised many of the world's largest internet companies on online child safety. In June, 2012, John was appointed a Visiting Senior Fellow at the London School of Economics and Political Science. This was renewed in 2018. More: http://johncarrcv.blogspot.com