Talking to my mechanic

Because I want my car to go faster, I take it to a garage and ask the mechanic to increase the compression ratio on the hyperflange (can you tell I am making this bit up?). I will be glad to hear what she has to say about whether or not that is a good idea. I want to know if there are any risks associated with this course of action. Is there anything I could do to minimise or completely eliminate the possibility of those risks materialising? My mechanic should tell me. It’s part of her job.

But in the end it is my decision whether or not to proceed, and she has a choice. The mechanic can do as I ask or refuse; if she refuses, I can take my car somewhere else, to someone who will do what I want.

If the tools and the knowledge about how to do something are already available, there will always be someone who will do it if you pay them. There isn’t a global closed shop of mechanics who all stick to a rigid set of rules, even though many of them might have learned their trade at the feet of the same instructors in garages in the UK, the US and other countries around the world, supplemented by experience working in or for a major garage company or garage consultancy.

OK. Enough of the analogy.

Is it so very different in the world of cyber insecurity?

If I ask a techie to build a system which can detect illegal images, or actions likely to be illegal, and to do that before they enter an encrypted space, and then ask them to integrate that system into their operations, they can say they think it is a bad idea and tell me why they think that. Everyone is entitled to have and express an opinion. They can point out the risks and also tell me how to minimise the possibility of those risks ever materialising.
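To make the idea concrete, here is a deliberately toy-sized sketch in Python of what "detection before encryption" can look like: content is checked on the device against a list of known-bad fingerprints, every decision is written to an audit trail, and only unmatched content goes on to be encrypted. The exact hashing, the blocklist and the audit log here are illustrative assumptions of mine, not any particular company's design; a real system would use perceptual hashing and independently audited lists.

import hashlib
from datetime import datetime, timezone

# Fingerprints of known illegal material. In a real deployment this list
# would be compiled and audited by an independent authority; here it is a
# single made-up sample so the sketch runs on its own.
KNOWN_BAD_FINGERPRINTS = {hashlib.sha256(b"known bad sample").hexdigest()}

# Transparency check: every matching decision is recorded so it can be
# reviewed later.
AUDIT_LOG = []

def fingerprint(data: bytes) -> str:
    """Exact SHA-256 stands in for the perceptual hashing a real system would need."""
    return hashlib.sha256(data).hexdigest()

def check_before_encryption(data: bytes) -> bool:
    """Return True if the content may proceed to the encryption step."""
    fp = fingerprint(data)
    matched = fp in KNOWN_BAD_FINGERPRINTS
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "fingerprint": fp,
        "matched": matched,
    })
    return not matched

if __name__ == "__main__":
    print(check_before_encryption(b"holiday photo"))     # True: passes, gets encrypted
    print(check_before_encryption(b"known bad sample"))  # False: blocked before encryption
    print(len(AUDIT_LOG))                                # 2: both decisions were logged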

But in the end whether or not to proceed is a policy decision. Not a technical one.

Perfection is not an option

There may never have been any piece of software developed and “put out there” which does not carry with it a risk that it will be abused or misused. We have a whole cyber-insecurity industry, anti-virus software, firewalls and the like, which bears witness to that.

I simply do not believe it is impossible, at least in countries where the Rule of Law is routinely honoured, to build systems with auditability, transparency and other checks built in which will either completely eliminate or reduce to near zero the possibility of those systems being abused or misused. Apple came up with such a system, and they have never said it is technically flawed. Being cowards, they simply decided they wouldn’t implement it, knowing perfectly well this meant children would continue to be put in danger on a substantial scale. Shame on them.

And in those countries where the Rule of Law is not routinely honoured, what’s the plan? Where are the cyber troops assembling? On which border? How does not protecting children in the UK, Norway and Germany help anyone in North Korea or any other totalitarian state?

And while we are on the subject, could someone tell me how many totalitarian states have fallen since the internet came along, and how many have been able to use the internet and its associated digital technologies to strengthen their grip on power?

I think the answers are “none” and “all of them”, respectively. So who exactly are our mechanics aiming at? Do they really think all of those countries which routinely honour the Rule of Law are tottering on the edge of becoming another North Korea but for their own selfless, brave efforts to keep crypto-fascist plans at bay?