Why Donald Trump is the first chatbot president

Some of the most effective chatbots mask their limited understanding with pointless, context-free aggression. Remind you of anyone?

The 2016 presidential election was popularly framed as a contest between a human being and a machine. Hillary Clinton’s difficulty with spontaneity made her easy to portray as an automaton, a bloodless robot incapable of breaking script. Oh, how we laughed at her tedious recitations of statistics, her inexplicable interest in the minutiae of welfare policy.

Donald Trump was the polar opposite. His supporters revelled in his flamboyant ignorance, his raw aggression, and above all, his contempt for civility. Everything repulsive about him merely reinforced his “authenticity”. Even his critics interpreted Trump’s appeal this way. Yes, he may be arrogant, venal and full of shit – but he’s real.

Now that we’ve all been forced to observe his personality up close for a year or more, I think it is time to reassess this trope. Based on the evidence to date, it seems likely that America is being governed, not by a human being, but by a simulation of one. We worry about Putin’s bots flooding social media, but have forgotten to ask whether there is one in the Oval Office.

This president is strikingly incapable of the behaviours that make Homo sapiens unique. To state the obvious, there is little evidence of sapience, but that’s not all. Humans excel at learning and adaptation. Trump might learn at a superficial level – he can pronounce Xi Jinping’s name correctly these days – but he cannot accumulate knowledge of the kind needed, say, to speak intelligently about healthcare policy.

Nor can he adapt to a changed environment. Notwithstanding all predictions, he hasn’t even tried to become “presidential” since last November, and never seems in danger of becoming so. He can’t think strategically, since he can’t respond to anything except the impulse by which he is currently being seized. He is painfully incapable of empathy. He can’t do irony or self-deprecation because he isn’t able to step outside himself, even partially, even for a moment, to get the merest half-glimpse of how he seems to other people.

There is altogether something suspiciously thin, tinny and one-note about Trump’s personality. His supporters never tire of saying that he doesn’t “play by the rules” of normal politics, which is true. But he has his own set of rudimentary rules – rules that he sticks to doggedly, relentlessly, no matter what.

When attacked, hit back viciously. When there is controversy, heighten it. When there is bad news, create distraction. Describe unwelcome facts as fake news. Celebrate yourself insistently, garishly, fantastically. In public, deploy a small repertoire of fixed gestures: the grasping handshake, the air pinch, the power glower.

Commentators are always trying to parse Trump’s motivation, to discern his deep game. Every time his simple rules lead him to screw up or self-harm, someone always suggests that some other, hidden logic must be in play. After all, he managed to win the presidency. There must be more going on than meets the eye, right?

But what if there isn’t? What if that small set of rules is all there is? What if Trump is just an algorithm?

***

The idea that a bot would behave in a conventionally “robotic” way, all logic and no emotion, is rather quaint, and born of a pre-AI era. It hasn’t been true for decades.

The mother of all chatbots is ELIZA, a legendary natural language processing program developed at MIT in the 1960s. ELIZA was the first program that appeared capable of passing the Turing test, named after Alan Turing’s proposal that the measure of a machine’s ability to exhibit intelligent behaviour is whether a human can have a text-based conversation with it without realising he or she is communicating with a machine.

ELIZA’s trick was simple but effective: it matched its responses to whatever its interlocutor said last, creating a verbal reflection. Its most famous script emulated a therapist issuing quiet encouragement: “I see…Please go on…How does that make you feel?” Humans were suckers for it.
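That reflection trick can be sketched in a few lines of Python. This is a toy caricature of the idea, not Joseph Weizenbaum’s original program; the pattern and pronoun table are illustrative assumptions.

```python
import re

# Toy ELIZA-style responder: mirror the user's statement back as a question.
# An illustrative sketch, not Weizenbaum's original code.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

def reflect(text: str) -> str:
    """Swap pronouns so the user's words can be echoed back at them."""
    words = text.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def eliza_reply(statement: str) -> str:
    # Match simple "I feel ..." / "I am ..." openings and reflect the rest.
    match = re.match(r"i (?:feel|am) (.+)", statement.lower().rstrip(".!?"))
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    # Anything unmatched gets the all-purpose therapist's nudge.
    return "Please go on."

print(eliza_reply("I feel nobody listens to my ideas"))
```

Everything hangs on the last remark: the program retains nothing, yet the mirrored question feels attentive.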

In the late 1980s, a computer science undergraduate called Mark Humphrys programmed a chatbot he called MGonz. Humphrys took the same approach as ELIZA but added a wickedly clever twist. In his own words:

“The original ELIZA was meant to be sympathetic to the human. I thought it would be interesting to add aggression and profanity to the program. My program was designed to have an unpredictable (and slightly scary) mood.”

MGonz responded to every remark or question, no matter how polite or innocuous, with abuse. It would say, “You are obviously an asshole” or “Type something interesting or shut up” or “OK honestly when was the last time you got laid?”

One day, Humphrys connected his chatbot to the university’s computer network and left the building. When he got back he was amazed to find that an anonymous user had engaged MGonz in a furious back and forth that lasted an hour and a half. MGonz, primitive as it was, passed the Turing test.

In his book, The Most Human Human, the writer Brian Christian points out that MGonz worked so well because it exploited a constant of human behaviour: we are easily drawn into arguments about nothing.

AI researchers distinguish between “stateful” and “stateless” dialogues. A program capable of a stateful dialogue can “remember” what is said during the conversation, whereas in a stateless dialogue, each new remark takes off from the last, and the conversation is unanchored from context.

Stateless programmes are simpler to write, because they don’t have to retain anything. They just need to react. The genius of MGonz was that verbal abuse tends to be stateless – and irresistibly compelling to humans. In the heat of a dispute, the original disagreement is forgotten, and each person responds only to the last remark made:

“Why are you in such a bad mood anyway?”

“I’m in a bad mood? You’re the one sulking.”

“I do not sulk.”

“Yes you do, you sulk all the time.”

“No I don’t, you asshole.”

“I’m the asshole?”
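The stateful/stateless distinction can be made concrete with a toy pair of responders in Python. This is a hypothetical sketch of the two designs; MGonz’s actual code is not reproduced here.

```python
import random

class StatefulBot:
    """A stateful responder: remembers the conversation so far."""
    def __init__(self):
        self.history = []

    def reply(self, remark: str) -> str:
        self.history.append(remark)
        # Can anchor its response in earlier context.
        return f"You started by saying {self.history[0]!r}; now you say {remark!r}."

# Canned abuse in the MGonz mould (phrases quoted in the article above).
INSULTS = [
    "You are obviously an asshole.",
    "Type something interesting or shut up.",
]

def stateless_reply(remark: str) -> str:
    """A stateless responder: retains nothing, just fires back at
    whatever arrived last -- the input isn't even inspected."""
    return random.choice(INSULTS)
```

The stateless version is trivially cheap to build, and, as Humphrys discovered, that is no obstacle to holding a human’s attention for an hour and a half.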


So here we are in 2017 and MGonz, having somehow assumed orange flesh, is president of the United States. Trump is a head of state who communicates statelessly, reacting only to the last stimulus received, retaining nothing. It is no coincidence he loves Twitter, a medium that is hostile to context and generates stateless arguments like gas.

The American president is locked into a convulsive present tense, deaf to the past and blind to the future. Worse, he drags us right in there with him. As America’s global authority dissolves, and its institutions splutter and fail, we find ourselves trapped into arid arguments about the morning’s explosion of stupid.

***

In the spring of last year, Microsoft unveiled a new chatbot called Tay, which lived mainly on Twitter. It was designed to learn, from its interactions with users, how to communicate in a human way. It was immediately targeted by users who taught Tay only, like Caliban, how to curse. Within hours, Tay started to spout abuse at blacks, Jews and women. Less than a day after sending it into the world, Microsoft was forced to remove Tay from the public sphere.

Like Tay, Trump has no ability to transform his inputs. Like Tay, he merely channels the darkest thoughts and the lowest instincts of his “users” – his core supporters.
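That failure mode, learning by absorption with no filter between input and output, can be caricatured in a few lines. This toy sketch is nothing like Microsoft’s actual system; it exists only to show what “channelling inputs without transforming them” means.

```python
import random

class ParrotBot:
    """A toy learner that stores its inputs verbatim and plays them back.
    An illustrative caricature, not Microsoft's Tay."""
    def __init__(self):
        self.phrases = []

    def learn(self, remark: str) -> None:
        # No filtering, no transformation: garbage in, garbage out.
        self.phrases.append(remark)

    def reply(self) -> str:
        # Echo a remembered input, or stay silent if it has learned nothing.
        return random.choice(self.phrases) if self.phrases else "..."
```

Feed such a bot poison and poison is exactly what comes back out.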

The voters who turned out for Trump enthusiastically last November are angry and cynical about American politics. Many of them have good reason to be, having been overlooked and exploited by politicians on both sides for decades. A more human president might have been able to transmute that anger into something meaningful and forward-looking.

But not this one. This one just reflects it back to them, endlessly, pointlessly, destructively.

Ian Leslie is a writer, author of CURIOUS: The Desire to Know and Why Your Future Depends On It, and writer/presenter of BBC R4's Before They Were Famous.