
What is the future of artificial intelligence?

Google's artificial intelligence machine AlphaGo has had shockingly good results - but how AI should be used remains a difficult question.

How did we get brains big enough to create machines with artificial intelligence? Some suggest that it was to help keep track of all the people, and their roles, within our growing social groups. Large, well-integrated and co-ordinated groups improved our chances of survival because they made the division of labour possible.

The alternative explanation is that our brain power evolved to facilitate problem-solving and invention. Whatever the cause, our evolved problem-solving abilities have thrown a spanner in the works. Google's artificial intelligence machine AlphaGo upends the evolved social contract. Now we can only hope that the machine will help us understand how to preserve the value of individuals who have no contribution to make.

Until recently, for instance, Lee Sedol’s unique selling point lay in his ability to beat all-comers at the ancient Asian game of Go. Now a team of human beings equipped with AlphaGo, an AI tool, have beaten him. The threat of AI does not lie in our having created the first machines whose workings we can’t explain; they aren’t going to subjugate people. But they are going to leave many without a contribution to offer society.

After the first defeat, Sedol pronounced himself “in shock”. After the second defeat he was “quite speechless”. After the third he confessed he felt “powerless”. If that’s how someone who explicitly prepared to pit himself against an AI feels, imagine how stunned we are going to be when the wider applications render many of us surplus to requirements.

This quiet revolution has already started. You know about Google’s self-driving car. Artificial intelligence is already better than most doctors at interpreting medical scans. It is organising school timetables and finding the optimal delivery schedule for supermarket supplies: getting Easter eggs into the hands of slavering infants involves AI.

You’re not even going to notice the takeover. Next time you’re in a supermarket, give the self-service checkout a hard stare. It’s essentially a static robot. And this robot has human assistants. Those people who turn up when you attempt to buy alcohol are summoned by the machine.

The human assistant is still necessary, but only because the manufacturers and programmers made a decision to limit the robot’s capabilities. They didn’t have to: if we decided we wanted fully autonomous robot checkouts, we could equip them to read iris scans or fingerprints, or simply use face recognition.

And that would require us to sign up and hand over our biometric data. Given a little time to get used to the idea, most of us probably would, and more jobs would go. That tells us something about why we should start coming to terms with the implications of AlphaGo's success.

AI is not inherently evil. But our inventive brains have created a situation that confuses our social brains. On the one hand, the tribe’s comfort will be increased by efficient machines. On the other, the tribe will find itself supporting a growing number who no longer make a meaningful contribution.

It’s not clear our big, clever brains can solve the problem. Maybe those who profit from making human roles redundant could pay a “human capital gains” tax: we could charge the innovators for replacing a job and divert the money into social programmes. But how do we make Google pay for implementing its AI? We may have found the problem AlphaGo can’t solve.

Michael Brooks holds a PhD in quantum physics. He writes a weekly science column for the New Statesman, and his most recent book is At the Edge of Uncertainty: 11 Discoveries Taking Science by Surprise.

This article first appeared in the 17 March 2016 issue of the New Statesman, Spring double issue


Robin Ince: Stephen Hawking made science relatable – why is it still so misunderstood?

We need more science and scientists in popular culture, so that children don’t give up on it as only for “boffins”.

I was 18 when A Brief History of Time was published. I had grown uninterested in science during the latter half of my secondary education, but I bought it anyway. I had fallen into the trap that Schopenhauer warned of, the failure to recognise that buying a book is not the same as reading it or indeed understanding it.

I read a little, then it went on a shelf. I read more of it than the surprise publishing hit of the previous year, Spycatcher. That book remains pristine in the shed, unlike A Brief History of Time, which is now pencil-marked, question-marked and annotated, if not fully understood.

The chuckled aside of “but no one’s actually read it” is really just another version of “what are those boffins on about, eh?”

The problem with popular physics books is that they are unlikely to be easy, especially if the last time you thought about physics was when you were using a bunsen burner as a weapon while distracted from discovering the energy of a peanut in class 3B.

Contemporary physics is counter-instinctual and eager to refute common sense. It takes time. If time exists, obviously.

As thrilling as it can be, you cannot read it at the speed of a thriller because it’s introducing you to a reality that appears so different to your reality.

It is easier to understand the actions of international spies in a Robert Ludlum novel than it is to understand the behaviour of particles and the curvature of space-time because we observe human fear and desire every day, even if we are not a rogue CIA agent.

Good physics books require frequent rest breaks – after all, they may well be turning your universe upside down, inside out or surrounding it with infinite other universes. There is no shame in being flummoxed by quantum indeterminacy and spending a while in a cool, dark room as you contemplate.

Carl Sagan, who wrote the original introduction to A Brief History of Time, wrote that children were born scientists, but they had it beaten out of them.

We are all curious, but with adulthood, our fear of embarrassment grows, and we temper our curiosity. Some close it down altogether and embrace dogma and tribalism. At birth, we all have the potential to be scientists. Then culture, encouragement or the lack of it, and expectations shape what we become. We do not have to give up on it; we just have to find the way in.

For many, the connection with Stephen Hawking began with the peculiarity of his story. Here was a man who was physically immobile while his mind traversed the universe. Before you even tried to approach his science, there was a story.

People need stories to engage; facts are not enough.

Visiting schools during Science Week, I hear the frustration from teachers that they do not have time to tell the stories of science, just the information that came from them. They have to deliver the facts at a speed that reaches the target required for the next assessment. The lessons that show the passions, drives and intrigue, the stories that can inspire, are a rare possibility. The curriculum needs space to enthuse.

Despite living in a world powered by scientific and technological innovation and in a civilisation whose future will be secured and enhanced by these innovations, mass media still treats the subject of “how the universe and everything in it from tadpoles to supermassive black holes came to be and where it is all going” as a niche subject.

We need more science and scientists in popular culture, more daily coverage so it does not become some otherness created by strange people who are not like us.

Let’s have more scientists with cameos on The Simpsons and Star Trek. Let’s not just have Benedict Cumberbatch on the chat show couch because he’s playing a scientist in a movie – let’s have the scientists on there, too.

It seems a pity to ignore the universe when there is so much of it.

It seems a pity to have a brain that has evolved to be curious, but not feed it questions – even if it does make it hurt sometimes.

Guinness World Records: Science & Stuff is out now.

Robin Ince is a writer and comedian. With Brian Cox, he guest edited the 2012 Christmas double issue of the New Statesman. He's on Twitter as @RobinInce.