What is the future of artificial intelligence?

Google's artificial intelligence machine AlphaGo has had shockingly good results - but how AI should be used remains a difficult question.

How did we get brains big enough to create machines with artificial intelligence? Some suggest that it was to help keep track of all the people, and their roles, within our growing social groups. Large, well-integrated and co-ordinated groups improved our chances of survival because they made the division of labour possible.

The alternative explanation is that our brain power evolved because we needed brains capable of problem-solving and invention. Whatever the cause, our evolved problem-solving abilities have thrown a spanner in the works. Google’s artificial intelligence machine AlphaGo upends the evolved social contract. Now we can only hope that the machine will help us understand how to preserve the value of individuals who have no contribution to make.

Until recently, for instance, Lee Sedol’s unique selling point lay in his ability to beat all-comers at the ancient Asian game of Go. Now a team of human beings equipped with AlphaGo, an AI tool, has beaten him. The threat of AI does not lie in our having created the first machines whose workings we can’t explain; they aren’t going to subjugate people. But they are going to leave many without a contribution to offer society.

After the first defeat, Sedol pronounced himself “in shock”. After the second defeat he was “quite speechless”. After the third he confessed he felt “powerless”. If that’s how someone who explicitly prepared to pit himself against an AI feels, imagine how stunned we are going to be when the wider applications render many of us surplus to requirements.

This quiet revolution has already started. You know about Google’s self-driving car. Artificial intelligence is already better than most doctors at interpreting medical scans. It is organising school timetables and finding the optimal delivery schedule for supermarket supplies: getting Easter eggs into the hands of slavering infants involves AI.

You’re not even going to notice the takeover. Next time you’re in a supermarket, give the self-service checkout a hard stare. It’s essentially a static robot. And this robot has human assistants. Those people who turn up when you attempt to buy alcohol are summoned by the machine.

The human assistant is still necessary, but only because the manufacturers and programmers made a decision to limit the robot’s capabilities. They didn’t have to: if we decided we wanted fully autonomous robot checkouts, we could equip them to read iris scans or fingerprints, or simply use face recognition.

And that would require us to sign up and hand over our biometric data. Given a little time to get used to the idea, most of us probably would, and more jobs would go. That tells us something about why we should start coming to terms with the implications of AlphaGo’s success.

AI is not inherently evil. But our inventive brains have created a situation that confuses our social brains. On the one hand, the tribe’s comfort will be increased by efficient machines. On the other, the tribe will find itself supporting a growing number who no longer make a meaningful contribution.

It’s not clear our big, clever brains can solve the problem. Maybe those who profit from making human roles redundant could pay a “human capital gains” tax: we could charge the innovators for replacing a job and divert the money into social programmes. But how do we make Google pay for implementing its AI? We may have found the problem AlphaGo can’t solve.

Michael Brooks holds a PhD in quantum physics. He writes a weekly science column for the New Statesman, and his most recent book is At the Edge of Uncertainty: 11 Discoveries Taking Science by Surprise.

This article appears in the 17 March 2016 issue of the New Statesman, Spring double issue