A problem of trust. Not tech. Part 6

One argument you hear in relation to the deployment of a given technology, for example Apple’s solution for detecting CSAM, is that whatever the initial benign, or at any rate acceptable, intention might have been when it was first developed, companies could be pressured into using it for bad purposes by a Government with evil intent.

That is true. It always has been and likely always will be. But you then hear it said that the Apple solution, and presumably other client-side scanning solutions like it which could emerge from other stables, should therefore be abandoned or banned. That is profoundly wrong. Sunlight is the best disinfectant, but there’s more to it than that.

Some companies give in to pressure from Governments. Some don’t

The fact is some firms refuse to operate in certain jurisdictions or are banned from doing so precisely because they will not comply with local regulations or the informal expectations of the regime. Other companies take a different view.

It is not self-evident that altruism, or the lack of it, or regard or disregard for human rights or other laws, is the only explanation for either kind of decision.

Not all Governments are evil

Nevertheless, we cannot look out at the world believing each and every Government, or collection of Governments such as the EU, is already evil or could easily become so. The global standard or the pace of progressive change cannot be determined by the recidivist worst offenders.

In democracies where the Rule of Law is routinely honoured, elected representatives are entitled to take a view about how their country, or the EU, is run. They are entitled, even obliged, to make adjustments which recognise the changes, good and bad, brought about by the transition from an analogue to a digital world. They must never allow inertia or complexity to lock them into a failing past.

Is there any product or service that has not been misused?

It is hard to think of any product or service, digital or otherwise, that has not been misused, perhaps egregiously, for purposes never envisaged or intended by the original developers or inventors. The internet itself is an example par excellence of that.

Once a particular technology has been developed and put “out there”, in all likelihood it is already too late to say “cancel it or call it back”. It cannot be uninvented. The bad guys will almost certainly already have it or will soon acquire it, or, now that they know it can be done and Silicon Valley has pointed the way, they will be spurred to create their own version.

So “you shouldn’t use it because it might be misused by others” comes down to this:

“We know the bad guys have technology x, or soon will have, and in all probability they will use it to do bad things, but we are nevertheless not going to use technology x to do good things”.

It’s absurd.

I get that some applications are likely to matter more than others, but in the case of Apple’s solution, or ones like it, what is the ethical case for refusing to use it to protect badly injured children because of a hypothetical or supposed, or even a proven, misuse by others? Children whose rights to privacy and human dignity have been trampled upon.

A technological development expressly put together to protect children is where you want to draw the line and fight? Words fail me. Actually they don’t, but this blog could be read before the watershed, so I will exercise uncharacteristic restraint.

Do we seriously think the bad guys can be shamed or persuaded into ceasing their appalling behaviour because of our noble disdain, our self-denying ordinance? Mr Putin? Mr Kim Jong-un? Mr Mafia? Gimme a break.

And please, don’t tell me we need to engage in multi-stakeholder dialogues and consensus building, or use the good offices of various highly esteemed international institutions, to persuade such people to behave better. Today? After 30-odd years of the same old same old, given the state the world is in, and is likely to remain in for a little while yet?

No more alibis for inaction

Don’t get me wrong. I am all in favour of standard-setting exercises and norm-building. I am all in favour of developing as much of a consensus as possible. This can be extremely valuable, particularly in nudging Governments or companies standing on the margins to move forward. We should always take time to think things through and try to spot as many side-effects or unintended consequences as possible.

But these processes have been cruelly, intentionally and cynically used and abused. If a company or a Government doesn’t want to oppose something outright, perhaps because doing so would be too embarrassing, or because it worries it could be defeated and humiliated if it did, then by kicking the matter into the long grass it can hope to drag out the status quo for a few more years, for as long as possible in fact.

Thus, what I am not in favour of is elevating such notions above all else and allowing them to become a block on concrete action that can actually change things on the ground, if not immediately for all children everywhere, then at least for those children within our reach. We don’t have the right to play along in a diplomatic or face-saving game which delays or deprives children of the protection they can rightfully expect and which is available now.

Some companies behave badly without any external pressure

But what if pressure from a bad Government wasn’t in play? Suppose a company was doing something it claims is to protect children but it is also, by the same process, extracting commercially valuable data that wouldn’t otherwise be available to it. Maybe that was its principal reason for acting and it used children simply as a cover.

That, after all, is what extreme elements in the privacy lobby constantly allege is going on. I stop short of that but, historically, we know with complete certainty that companies have extracted commercially valuable data from their users in ways which are either straightforwardly illegal in their own or in many other jurisdictions, or which are not transparently disclosed in their proper context. They only stopped when they were caught. That is no longer good enough. We need proactive reassurance, and that can only be obtained through improved transparency measures.

We all have an interest in knowing the truth

There is no legitimate privacy interest in protecting contraband or other illegal activity. Even so, I still want to know if the measures which are supposed to be protecting children are working well enough. But I don’t want to get that information from a carefully crafted (massaged) press release or voluntary report which may conceal as much as it reveals.

I also want to know that when we are told Apple-like solutions are being deployed only to identify CSAM and nothing else, this is a fact. We live in a world of zero trust.

What is unacceptable is for anyone to argue for a delay (see above) in the roll-out of such approaches while we sort out the complexities of establishing a workable transparency regime. That makes children pay the price for our past failures, again. And let’s not forget that a closely related technology, PhotoDNA, has been deployed since 2009 and was recently expressly upheld for continued use in the EU’s interim derogation. I know of no cases where PhotoDNA has got it wrong or put anyone in legal or reputational jeopardy.
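For readers wondering what “only to identify CSAM and nothing else” means in practice, here is a deliberately simplified sketch of a match-list-only design. It is not PhotoDNA’s or Apple’s actual algorithm: those systems use proprietary perceptual hashes that survive resizing and re-encoding, whereas this illustration uses a plain SHA-256 purely to show the shape of the idea. The hash list and function names below are hypothetical.

```python
import hashlib
import sys
from pathlib import Path

# Hypothetical illustration only. Real deployments use perceptual hashing
# (e.g. PhotoDNA) rather than a cryptographic hash, and the match list is
# supplied by a clearinghouse and is opaque to the device owner -- which is
# exactly why independent transparency and audit of that list matters.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def matches_known_list(image_path: Path) -> bool:
    """Return True only if the file's hash appears on the fixed list.

    Nothing else about the image is inspected, stored or transmitted in this
    sketch -- the point of a narrowly scoped, match-list-only scanner.
    """
    digest = hashlib.sha256(image_path.read_bytes()).hexdigest()
    return digest in KNOWN_HASHES


if __name__ == "__main__":
    for name in sys.argv[1:]:
        print(name, matches_known_list(Path(name)))
```

The code is the easy part. The hard question, and the one transparency measures have to answer, is whether the list being matched against contains only what we are told it contains.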

Some of the new data sharing regulations that are coming in the EU may help with the kind of transparency I am calling for, but that is still far from clear. We will probably have to wait for the soon-to-be-published new strategy on combatting child sexual exploitation.

Watch this space.

I regret to say, once again, I will have to write another blog in this series. There is more I need to cover and this blog is already too long. My editor is on holiday, but I will truly try to make the next one the last one in the series.