18 March 2016

What is the future of artificial intelligence?

Google's artificial intelligence machine AlphaGo has had shockingly good results - but how AI should be used remains a difficult question.

By Michael Brooks

How did we get brains big enough to create machines with artificial intelligence? Some suggest that it was to help keep track of all the people, and their roles, within our growing social groups. Large, well-integrated and co-ordinated groups improved our chances of survival because they made the division of labour possible.

The alternative explanation is that our brain power evolved because we needed brains capable of problem-solving and invention. Whatever the cause, our evolved problem-solving abilities have thrown a spanner in the works. Google’s artificial intelligence machine AlphaGo upends the evolved social contract. Now we can only hope that the machine will help us understand how to preserve the value of individuals who have no contribution to make.

Until recently, for instance, Lee Sedol’s unique selling point lay in his ability to beat all-comers at the ancient Asian game of Go. Now a team of human beings equipped with AlphaGo, an AI tool, have beaten him. The threat of AI does not lie in our having created the first machines whose workings we can’t explain; they aren’t going to subjugate people. But they are going to leave many without a contribution to offer society.

After the first defeat, Sedol pronounced himself “in shock”. After the second defeat he was “quite speechless”. After the third he confessed he felt “powerless”. If that’s how someone who explicitly prepared to pit himself against an AI feels, imagine how stunned we are going to be when the wider applications render many of us surplus to requirements.

This quiet revolution has already started. You know about Google’s self-driving car. Artificial intelligence is already better than most doctors at interpreting medical scans. It is organising school timetables and finding the optimal delivery schedule for supermarket supplies: getting Easter eggs into the hands of slavering infants involves AI.


You’re not even going to notice the takeover. Next time you’re in a supermarket, give the self-service checkout a hard stare. It’s essentially a static robot. And this robot has human assistants. Those people who turn up when you attempt to buy alcohol are summoned by the machine.

The human assistant is still necessary, but only because the manufacturers and programmers made a decision to limit the robot’s capabilities. They didn’t have to: if we decided we wanted fully autonomous robot checkouts, we could equip them to read iris scans or fingerprints, or simply use face recognition.

And that would require us to sign up and hand over our biometric data. Given a little time to get used to the idea, most of us probably would, and more jobs would go. That tells us something about why we should start coming to terms with the implications of AlphaGo’s success.

AI is not inherently evil. But our inventive brains have created a situation that confuses our social brains. On the one hand, the tribe’s comfort will be increased by efficient machines. On the other, the tribe will find itself supporting a growing number who no longer make a meaningful contribution.

It’s not clear our big, clever brains can solve the problem. Maybe those who profit from making human roles redundant could pay a “human capital gains” tax: we could charge the innovators for replacing a job and divert the money into social programmes. But how do we make Google pay for implementing its own AI? We may have found the problem AlphaGo can’t solve.

