Meet the man who wants total unemployment for all human beings in the world

Hugh Loebner is offering researchers $100,000 to develop a computer that thinks like a human. But is that really the best use of artificial intelligence?

Take a moment to salute the majesty of human conversation. When we talk to each other, whether it’s about last night’s TV or the wisdom of a military strike on Syria, we are doing something far harder than sending a rocket to the moon. We did the moonshot decades ago but we still can’t make a machine that will hold a decent conversation.
 
On 14 September, researchers will gather in Derry, Northern Ireland, to demonstrate their latest efforts. If any of them has created a machine that successfully mimics a human, they will leave $100,000 richer.
 
The money is being put up by Hugh Loebner, a New York-based philanthropist. His goal, he says, is total unemployment for all human beings throughout the world. He wants robots to do all the work. And the first step towards that is apparently to develop computers that seem human when you chat to them.
 
It’s not a new idea. Alan Turing is credited with the first explicit outline of what is now called the Turing test. A human judge sits down at a computer and has a typed conversation with an entity that responds to whatever the judge types. If that entity is a computer, but the judge thinks it’s a person, the conversational computer program passes the test.
 
At the Derry event, the programs won’t compete directly. Instead, each judge will hold a conversation at two terminals, one relaying the typed replies of a human being, the other controlled by a program. The judge will decide which seems more human; if it’s the computer, that program goes through to the next round, where the challenges get harder.
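For readers who want the mechanics spelled out, here is a minimal sketch in Python of the kind of pairing described above. It is purely illustrative and bears no relation to the actual competition software: the canned chatbot, the stand-in "human" and the judge's crude heuristic are all invented for the example.

```python
import random

# Illustrative sketch only: a trivial "program", a stand-in "human",
# and a toy judge, to make the structure of the test concrete.

def program_reply(question: str) -> str:
    # A deliberately shallow chatbot: it deflects rather than answers.
    return "That's an interesting question. What do you think?"

def human_reply(question: str) -> str:
    # Stand-in for a real person typing at the other terminal.
    return f"Honestly, I'd have to think about '{question}' for a moment."

def judge(transcript_a, transcript_b) -> str:
    # A real judge reads both conversations and picks whichever feels more
    # human. This toy judge simply prefers the terminal that varies its replies.
    def variety(transcript):
        return len({reply for _, reply in transcript})
    return "A" if variety(transcript_a) >= variety(transcript_b) else "B"

def run_round(questions) -> bool:
    # The judge doesn't know which terminal hides the program,
    # so assign the two respondents to terminals A and B at random.
    assignments = {"A": human_reply, "B": program_reply}
    if random.random() < 0.5:
        assignments = {"A": program_reply, "B": human_reply}

    transcripts = {label: [(q, respond(q)) for q in questions]
                   for label, respond in assignments.items()}

    choice = judge(transcripts["A"], transcripts["B"])
    # The program "wins" the round if the judge picks its terminal as the human one.
    return assignments[choice] is program_reply

if __name__ == "__main__":
    questions = ["What did you watch last night?", "Should we strike Syria?"]
    print("Program mistaken for the human:", run_round(questions))
```

Even in this toy version, the canned responder gives itself away the moment the questions vary, which is roughly the fate of the real entrants once a judge steers the conversation somewhere unexpected.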
 
So far, no one has won the big prize but every year the most convincing program wins a smaller amount. The creator of the last program to be rumbled this year will walk away with 4,000 of Loebner’s dollars.
 
Many people in this research field think the competition is a waste of time. The founder of MIT’s artificial intelligence (AI) laboratory, Marvin Minsky, once offered to pay $100 to anyone who could convince Loebner to withdraw his prize fund. Minsky’s complaint is that the Loebner Prize gives AI a bad name. The programs are not convincing for long – steer the conversation the right way and you can unseat them fairly easily (transcripts of last year’s conversations are available online). Yet AI is in fact becoming rather useful.
 
Computers may not be able to hold a conversation with human beings, but algorithms that adapt “intelligently” to circumstances are starting to hit the streets: Google’s self-driving cars run on AI. The way phone calls are routed through a network relies on other autonomous, flexible programs. Email spam filters, speech-recognition software, stock-market trades and even some medical diagnoses routinely employ machines that seem to think for themselves.
 
Where the Loebner Prize is most useful is probably in providing a check on our enthusiasm. Researchers have created AI programs designed to look at CCTV footage and decide whether a crime is about to be committed: a rapidly moving limb might indicate an assault in progress, and a gait associated with fast running can be read as someone fleeing a crime scene.
 
Similar innovations have been tried on the London Underground – a program looks for “suspicious” patterns of movement which indicate that someone might be preparing a terrorist attack or be about to jump under a train. Once the program has decided there is a risk, it will alert the authorities.
 
Though these AI programs remain as flawed as the ones attempting to hold a conversation, let’s hope we won’t be tempted to cede all our liberties to them.

Michael Brooks holds a PhD in quantum physics. He writes a weekly science column for the New Statesman, and his most recent book is At the Edge of Uncertainty: 11 Discoveries Taking Science by Surprise.

This article first appeared in the 16 September 2013 issue of the New Statesman, Syria: The deadly stalemate


The man who created the fake Tube sign explains why he did it

"We need to consider the fact that fake news isn't always fake news at the source," says John Moore.

"I wrote that at 8 o'clock on the evening and before midday the next day it had been read out in the Houses of Parliament."

John Moore, a 44-year-old doctor from Windsor, is describing the whirlwind process by which his social media response to Wednesday's Westminster attack became national news.

Moore used a Tube-sign generator on the evening after the attack to create a sign on a TfL Service Announcement board that read: "All terrorists are politely reminded that THIS IS LONDON and whatever you do to us we will drink tea and jolly well carry on thank you." Within three hours, it had just fifty shares. By the morning, it had accumulated 200. Yet by the afternoon, over 30,000 people had shared Moore's post, which was then read aloud on BBC Radio 4 and called a "wonderful tribute" by prime minister Theresa May, who at the time believed it was a genuine Underground sign. 

"I think you have to be very mindful of how powerful the internet is," says Moore, whose viral post was quickly debunked by social media users and then national newspapers such as the Guardian and the Sun. On Thursday, the online world split into two camps: those spreading the word that the sign was "fake news" and urging people not to share it, and those who said that it didn't matter that it was fake - the sentiment was what was important. 

Moore agrees with the latter camp. "I never claimed it was a real tube sign, I never claimed that at all," he says. "In my opinion the only fake news about that sign is that it has been reported as fake news. It was literally just how I was feeling at the time."

Moore was motivated to create and post the sign when he was struck by the "very British response" to the Westminster attack. "There was no sort of knee-jerk Islamophobia, there was no dramatisation, it was all pretty much, I thought, very calm reporting," he says. "So my initial thought at the time was just a bit of pride in how London had reacted really." Though he saw other, real Tube signs online, he wanted to make his own in order to create a tribute that specifically epitomised the "very London" response. 

Yet though Moore insists he never claimed the sign was real, his caption on the image - which now has 100,800 shares - is arguably misleading. "Quintessentially British..." Moore wrote on his Facebook post, and agrees now that this was ambiguous. "It was meant to relate to the reaction that I saw in London in that day which I just thought was very calm and measured. What the sign was trying to do was capture the spirit I'd seen, so that's what I was actually talking about."

Not only did Moore not mean to mislead, he is actually shocked that anyone thought the sign was real. 

"I'm reasonably digitally savvy and I was extremely shocked that anyone thought it was real," he says, explaining that he thought everyone would be able to spot a fake after a "You ain't no muslim bruv" sign went viral after the Leytonstone Tube attack in 2015. "I thought this is an internet meme that people know isn't true and it's fine to do because this is a digital thing in a digital world."

Yet despite his intentions, Moore's sign has become the centre of a debate about whether "nice" fake news is as problematic as the fake news notoriously spread during the 2016 United States presidential election. Though Moore can understand this perspective, he ultimately feels that the sentiment behind the sign makes it acceptable. 

"I use the word fake in inverted commas because I think fake implies the intention to deceive and there wasn't [any]... I think if the sentiment is ok then I think it is ok. I think if you were trying to be divisive and you were trying to stir up controversy or influence people's behaviour then perhaps I wouldn't have chosen that forum but I think when you're only expressing your own emotion, I think it's ok.

"The fact that it became so-called fake news was down to other people's interpretation and not down to the actual intention... So in many interesting ways you can see that fake news doesn't even have to originate from the source of the news."

Though Moore was initially "extremely shocked" at the response to his post, he says that on reflection he is "pretty proud". 

"I'm glad that other people, even the powers that be, found it an appropriate phrase to use," he says. "I also think social media is often denigrated as a source of evil and bad things in the world, but on occasion I think it can be used for very positive things. I think the vast majority of people who shared my post and liked my post have actually found the phrase and the sentiment useful to them, so I think we have to give social media a fair judgement at times and respect the fact it can be a source for good."

Amelia Tait is a technology and digital culture writer at the New Statesman.