Existential threats?

Artificial Intelligence (AI) has been around for a long time. Some trace its origins to Alan Turing’s work in the 1950s. Yet if you had just emerged from a period of extended hibernation you could be forgiven for thinking AI was invented only last Tuesday.

Scale changes everything

What has actually happened is that the tools to create sophisticated AI applications have started to be made available pretty much indiscriminately to anyone with internet access. Some very smart individuals with considerable knowledge of these matters, people not generally given to alarmist scaremongering, are starting to worry about the implications, yea, even unto the future of the entire human race. The UK Government has convened a global meeting at Alan Turing’s spiritual home, Bletchley Park, to discuss the issue.

Permissionless innovation

“Permissionless innovation” is said to be a fundamental principle of how the internet works. The current panic shows where blind adherence to it can get you. Anybody can try anything. All they need is the technical knowledge, though money and marketing skills help a lot, particularly if you plan to “go big”. There is a lot of money, and a lot of marketing skill, kicking around AI.

An app or programme can be halfway around the world before caution and wisdom have got their trousers on. A marvel of the modern age? Unquestionably, but maybe it is finally dawning on more people than ever before that the price for this is too high, and we need to find a way to change the equation. Not easy, but then, if it were easy, it would already have been done.

When a relatively small number of companies or institutions had the capacity to develop and deploy AI tools, and when those tools were confined to a narrow range of use cases, any problems that were discovered could be isolated and addressed. Or at least there was a good chance they could be. In that sense they were manageable. Containable.

This is why, in the new situation, with AI about to become both massively available and considerably more sophisticated, a call to reflect on how to deal with its proliferation has resonated across such a broad spectrum of scientific, technical and intellectual opinion.

Politicians engaging early in the cycle? Or too late already?

For all its wonderfulness, it is now widely understood that the internet has not been an absolutely unqualified boon. For fear that AI might be emerging as another of the bad bits, e.g. ending all life on Earth, politicians are paying attention early in the cycle (let’s hope not too late). And they can do that because, unlike during the early, formative years of the internet, there are now many more people, civil servants and civil society organizations around who understand things a lot better. They feel less cowed. More self-confident. Indeed there are now also many more politicians around who feel they know enough to engage in ways that they could not have done before.

They may not have scaled the heights of a PhD from MIT, but they know what can happen when things turn out badly, and in the end it is how things turn out that matters, not the bamboozling explanations which patronizingly tell you why.

We will all go together when we go

Still, if AI does blow up the planet, we can all die happy knowing nobody had tried to impose limits on the operation of the free market or restrict our “right to innovate”. That should give us some comfort.

As a US Founding Father once said, “Give me liberty, or give me death”. Your wish is granted. You got death. I hope you’re happy now.

And who voted for this? Nobody did. Big Tech decided to do it because they could and because they saw how it might make them a buck. They are the biggest drivers. By far.

Some highly motivated, altruistic not-for-profits, private individuals and groups of researchers have also engaged because they have a particular view of the world and are on a mission. Assuming full transparency as to the source of their funds and any important political connections they might have, there is nothing wrong in principle with any of that.

A pause for thought?

But couldn’t we have stopped and reflected first? Is technological history bound to keep repeating itself? “Release it Tuesday, fix it Thursday. Maybe”, to paraphrase Professor Ross Anderson of the University of Cambridge, or “Move fast and break things”, to quote Mark Zuckerberg verbatim. Breaking the world is probably a tad too far even for him.

Is the gig in Bletchley Park truly going to represent a turning point?

The relevance of the example of AI

The example of AI is relevant because exactly the same kind of thing is now happening in another digital field. It has enormous implications for the Rule of Law, and therefore for the future of liberal democracies, not to mention national sovereignty, or even the usefulness and relevance of International Treaties and Conventions and a great many existing international institutions.

End-to-End Encryption is moving us towards a major crisis

End-to-End Encryption (E2EE) stands on the edge of becoming a technology indiscriminately available on a massive scale, in effect to anyone and everyone, with no, or only minimal, formalities. This will not be without consequences, but those consequences seem to have been obscured by a highly effective campaign of misdirection and misinformation.

This is the first in a short series of probably long blogs on E2EE. I hope the others follow swiftly. Watch this space.