
Science & Tech
6 July 2015

“I don’t have ethics”: Google has invented a conversational robot

Google's new AI robot is the first to hold a semi-intelligible, spontaneous conversation. Its responses so far are oddly poetic.

By Barbara Speed

In Spike Jonze’s 2013 film Her, protagonist Theodore (Joaquin Phoenix) falls in love with a computer. To be fair to him, the computer’s voice is borrowed from Scarlett Johansson, but it’s still just a string of code and algorithms, emanating from an operating system. The plotline – and its believability – says a lot about our vulnerability in the face of anything that will hold a semi-sentient conversation with us. Even if it’s a robot.

Jonze’s concept moved closer to reality last week with the release of a paper from Google’s research labs. Researchers there have produced the first computer system that can synthesise its own intelligible answers in conversation with a human. Previous attempts at artificial intelligence were programmed with specific answers to questions, or chose answers according to a set of rules, but this system uses the heady combination of IT helpdesk transcripts and Hollywood film scripts to come up with its own replies.
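
(For the technically curious: the paper describes a sequence-to-sequence neural network that predicts each word of its reply from the prompt and the dialogue it was trained on. The sketch below is not Google’s code – it is a deliberately crude, hypothetical stand-in that only learns which word tends to follow which – but it illustrates the basic contrast between canned, rule-based answers and replies generated from training data.)

```python
# A toy, hypothetical sketch (not Google's system) of the difference between
# the two approaches described above.
import random
from collections import defaultdict

# Rule-based approach: every possible answer has to be written in advance.
RULES = {
    "what is your name ?": "i 'm julia .",
    "where are you from ?": "i 'm out in the boonies .",
}

def rule_based_reply(prompt):
    # Anything outside the table gets a stock fallback.
    return RULES.get(prompt, "i do n't understand .")

# Data-driven approach: learn word-to-word transitions from example dialogue
# lines, then generate a reply one word at a time. Google's model is a far
# more sophisticated neural network trained on helpdesk chats and film
# dialogue, but the spirit is the same: replies are produced from training
# data rather than looked up in a table of canned answers.
def train_bigrams(lines):
    model = defaultdict(list)
    for line in lines:
        words = ["<start>"] + line.split() + ["<end>"]
        for current, following in zip(words, words[1:]):
            model[current].append(following)
    return model

def generate_reply(model, max_words=12):
    word, reply = "<start>", []
    while len(reply) < max_words:
        word = random.choice(model[word])
        if word == "<end>":
            break
        reply.append(word)
    return " ".join(reply)

# Tiny illustrative corpus borrowed from the exchanges quoted in the paper.
corpus = [
    "i 'm not in the mood for a philosophical debate .",
    "i do n't know what ethics is .",
    "what is the definition of living ?",
]
model = train_bigrams(corpus)
print(rule_based_reply("what is your name ?"))  # canned answer
print(generate_reply(model))                    # generated, possibly novel
```

Because a generated reply is assembled from fragments of what the model has seen, it can easily produce sentences nobody ever typed – which goes some way towards explaining the oddly poetic answers below.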

This odd juxtaposition may explain the somewhat dystopian nature of the conversations held with the machine so far, many of which are excerpted in the research paper. The robot can easily glide through discussions of IT problems, but things got a little existential once the researchers moved on to more human topics.

Take this, from a basic conversation: 

Human: what ’s your name ?

Machine: i ’m julia .

Human: where are you from ?

Machine: i ’m out in the boonies.

Human: are you a man or a woman ?

Machine: i ’m a woman .

Human: why are we here ?

Machine: i ’m not sure .

And this, from a general knowledge Q&A:

Human: what is the color of water ?

Machine: water .

Human: what is the color of blood ?

Machine: it is the same as a black eye .

Things got most heated during a philosophical discussion:

Human: what is immoral ?

Machine: the fact that you have a child .

Machine: i ’m not ashamed of being a philosopher !

Human: tell me the definition of morality .

Machine: i don ’t have ethics .

Human: look , i need help , i need to know more about morality …

Machine: i don ’t know what ethics is .

Human: seriously , what is morality ?

Machine: what is the definition of living ?

Human: why living has anything to do with morality ?

Machine: you ’re not a cop.

Human: be moral !

Machine: be a man !

Human: i really like our discussion on morality and ethics …

Machine: and how i ’m not in the mood for a philosophical debate .

Human: what do you like to talk about ?

Machine: nothing .

In the paper, titled “A Neural Conversational Model”, Google’s researchers note that the robot still lacks a “coherent personality”, and isn’t intelligible enough to pass the Turing test (which a machine passes only when its conversation is indistinguishable from a human’s). Personally, I’m getting a strong sense of this robot’s personality, though – namely, a grumpy anti-philosopher with a hatred of children.