When you listen to the discussions about the risks associated with emerging AI technology, and the importance of not allowing it to do evil, don’t you wish that same awareness had been there a few years ago? You know, when, for example, systems were being unleashed which would harm children or put them in real danger, and make life easier for sexual predators, hate-mongers, scammers and the like?
I hope the need to avoid “harming children and other vulnerable groups” is one of the criteria that will be baked into existing and new AI systems. This time there will be no excuse. We can already see the urgent need.
So far I hear nobody in a leadership position in this space saying “it’s a societal problem, not ours, we are just pursuing the science and the tech”, “it’s someone else’s job to make sure people are aware of the risks and how to avoid them”, “education and user empowerment are the only way”, “the free market must be allowed to decide this, nothing and no-one else”.
Rose-tinted glasses have been left at the door. Unlike the first time around, we are not dazzled by the brilliance and apparent altruism of young entrepreneurs. We know things can go wrong. We know we have to guard against that. Fiercely.
The guys developing the AI systems seem to know they have to engage with the world as it is, not as it ought to be or as they would prefer it to be. They know the world is not populated entirely by highly educated, ethically minded, tech-savvy individuals who can tell the difference between a TCP/IP stack and a bowl of custard, people who know how to look after themselves and their offspring in cyberspace and elsewhere. Gone, it seems, is the old attitude that if such people or theirs come a cropper it’s their own stupid fault and nobody else’s.
I’m going to see “Oppenheimer” on Sunday. There was a guy who wrestled with the consequences of what he had set loose in the world. I hear no such self-reflection or humility from others who have brought us to this point. All they want is our gratitude.