4 October 2023

Trapped in the AI echo chamber

Mustafa Suleyman and his fellow artificial intelligence cheerleaders now say their inventions could destroy us. Should we believe them?

By Will Dunn

One book that helps to explain the current debate around artificial intelligence is Julia Donaldson’s The Highway Rat, in which the titular rodent captures a duck, whom he intends to eat, and is outsmarted by her. The duck – spoiler alert – leads the rat to a mountain cave, in which she claims her sister has amassed a hoard of sweet treats. What the rat doesn’t know is that the cave produces a convincing echo. When he yells his demands for chocolates and cake into the mouth of the cave, he is answered by what sounds like another voice, affirming his desires and beckoning him in. Led by his greed, he wanders into the cave and is lost.

This is what ChatGPT and other large language models do: they echo human language in order to tell us what we want to hear. As the British AI entrepreneur Mustafa Suleyman explains in The Coming Wave, these models don’t use words, necessarily, but can break up text into groups of letters and symbols called “tokens”. Give the model enough text (by helping yourself to every book, article, poem, song and conversation uploaded to the internet, for example), and it can make an enormous list of every possible token and the thousands of possible ways (“vectors”) in which that token can relate, across trillions of sentences, to every other token. This allows the model to build up what AI scientists call an “attention map”. For any given prompt, it can use this map to calculate the sequence of tokens that has the highest probability of being accepted as a good output – the most convincing possible echo.
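
To make that mechanism concrete, here is a deliberately tiny sketch in Python (illustrative only, and in no way OpenAI's or Suleyman's code): it stands in for next-token prediction by counting, over a toy corpus, which token most often follows which, then greedily emitting the likeliest continuation. Real models use subword tokens, attention over long contexts and billions of learned parameters rather than adjacent-word counts, but the underlying move – pick the most probable next token – is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "every book, article, poem, song and conversation".
corpus = "the rat hears the echo and the rat believes the echo".split()

# Count how often each token follows each other token: a crude, tiny stand-in
# for the statistics a large language model learns across trillions of sentences.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(prompt_token, length=5):
    """At each step, emit the token with the highest count of following the last one."""
    out = [prompt_token]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])  # the most convincing echo
    return " ".join(out)

print(continue_text("the"))  # e.g. "the rat hears the rat hears"
```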

The philosophy of language, from Gottlob Frege’s “On Sense and Reference” (1892) to Saul Kripke’s Naming and Necessity (1980), has sought to define what connects the propositional and empirical worlds. It is in that link – between the thing said and the thing denoted – that meaning, for us, is thought to arise. ChatGPT, on the other hand, processes language only as propositions: “pencil” is a set of letters that are likely to occur near the letters in the word “paper”. There is no empirical referent to which this is linked; output is output, and the only weight a word has is its probability of being accepted.
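
The point that "pencil" has no referent, only neighbours, can be illustrated with a toy distributional model (a hypothetical sketch, not how any production system is built): represent each word purely by counts of the words that co-occur with it, and "similarity of meaning" falls out of vector arithmetic alone, with nothing in the world attached.

```python
import math
from collections import Counter

def context_vector(word, tokens, window=2):
    """Represent a word purely by counts of the words that appear near it."""
    vec = Counter()
    for i, t in enumerate(tokens):
        if t == word:
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    vec[tokens[j]] += 1
    return vec

def cosine(a, b):
    """Similarity of two count vectors: no referent, only co-occurrence."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

tokens = ("sharpen the pencil and write on the paper "
          "then fold the paper and sharpen the pencil again").split()
print(cosine(context_vector("pencil", tokens), context_vector("paper", tokens)))
```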

Like any good echo, the results are uncanny: prompt it to say so, and the model will tell you it is sentient and feels emotions towards you (equally, if prompted, it will tell you it is not sentient and has no capacity for experience). It is even more uncanny now that ChatGPT can conduct conversations using speech. In late September, Lilian Weng, one of the executives responsible for “safety” at OpenAI, the company that makes ChatGPT, tweeted that she had had an “emotional, personal conversation” with the software “in voice mode, talking about stress, work-life balance” that made her feel “heard & warm”.

This brings us back to the cave: the duck achieves power over the Highway Rat when the rat believes the echo. It is worth asking if a senior executive at an AI company – someone who knows very well that their software has not been programmed with any capacity for experience or understanding – is sincere when they say they feel “heard” by it. (It is worth noting, too, that this “personal conversation” happened to take place a day after OpenAI announced the capabilities of its new software.) Such pronouncements form part of a narrative, one that is being energetically promoted by the AI industry, that humanity stands on the brink of immense change. It is hard not to view this as part of Silicon Valley’s messianic marketing strategy. Or at least to ask: to what extent do tech gurus themselves believe the echo? Are these people rats, or ducks?

This question is important because the technology industry is no longer simply casting itself as the source of useful new inventions but as the sector that could decide if humanity survives as a species. In May the CEO of OpenAI, Sam Altman, told the US Senate Judiciary Committee that he believed his company’s products could do “significant harm to the world” unless governments stepped in to work with the industry; in the hearing, AI was compared to nuclear weapons in terms of its potential impact. This fear may be genuine, but it is also financially useful: power, even if destructive, is eminently investible (the market value of OpenAI’s biggest investor, Microsoft, rose by $34bn on the day of the hearing). Moreover, by framing this as an impending crisis to which only they have the solution, AI companies are able to set the terms for their own regulation.

In The Coming Wave Suleyman goes further, detailing another technology that has apocalyptic potential. There is AI, in which he is an expert, having co-founded the leading AI company DeepMind (which was acquired by Google in 2014) before founding another company, Inflection AI, last year. But there is also synthetic biology, which, since the discovery of the gene-editing properties of bacterial DNA sequences (known as “CRISPR”), has offered scientists a growing power to edit the building blocks of life. Not long before Covid-19 was identified, Suleyman was at a seminar at which one attendee speculated that a hobbyist with a small DNA synthesiser (yours for $25,000) could create the next global pandemic. No one there knew that it would arrive – perhaps having been created in a lab – within a few months. For decades people have marvelled at the growth of computing power but largely ignored the concomitant drop in the cost of gene sequencing, which is a million times cheaper than it was 20 years ago.

Suleyman believes humanity is at an inflection point, moving from an age when our most powerful weapons (nuclear bombs) were too expensive for all but the most advanced economies to one in which any terrorist group or religious cult may be able to harness the dread powers of machine superintelligence and artificial biology. His argument is that advances in technology are impossible to prevent or ignore, but that “containment” of a sort is possible if businesses and governments work together to build a culture of safety and regulation. He cites the aviation industry, which has made an apparently dangerous act (travelling at 500 miles an hour, seven miles in the sky) safer than walking next to a road.

The two strands Suleyman has chosen as the big risks are convincing, because they relate to the founding ideas of postwar technological development. The Second World War was won, at least in part, by two mathematicians, Alan Turing and John von Neumann, who developed the first computers to, respectively, break Nazi cryptography and develop nuclear weapons. As George Dyson wrote in Turing’s Cathedral, his brilliant history of the beginnings of the digital world: “Turing’s question was what it would take for machines to begin to think. Von Neumann’s question was what it would take for machines to begin to reproduce.”

Turing’s answer to his own question was to devise a test – “the imitation game” – in which a computer could be said to be “thinking” if it could fool a human interrogator for at least five minutes of conversation. His 1950 paper “Computing Machinery and Intelligence” contains elegant rebuttals of the logical arguments against accepting that a machine can offer the same linguistic output as a person. But now that his test has been passed – chatbots are demonstrably capable of persuading humans they are sentient – the central question is: does it matter if the algorithm’s only purpose is to pass the test?

There are certainly some frightening things that AI might do in the near future: a deluge of deepfakes might disrupt elections, or AI might allow someone to cause havoc with a virus (digital or biological). But there is also a risk that in overstating AI’s power, we change how we think about ourselves.

There are two ways to read Suleyman’s claim that AI can “ace” the exams passed by lawyers and doctors, for example. One might argue that “passing an exam” is little more than producing a string of tokens that will be accepted as correct, while “being a doctor” requires a human mind that has evolved to navigate the immeasurable complexity of real life. More popular among the AI community, however, is the idea that humans themselves are simply receiving inputs and generating outputs. Sam Altman, perhaps the most powerful person in AI, espouses this belief: it is not just machines that are playing the imitation game, but people, too.

That is a dehumanising view, corrosive to conventional ethics. Politicians and lawmakers should listen to the warnings of people such as Mustafa Suleyman, but they should ask if those warnings could serve more than one purpose. The first thing to do is to echo Alan Turing, and ask: is this guy for real?

The Coming Wave
Mustafa Suleyman with Michael Bhaskar
Bodley Head, 352pp, £25

This article appears in the 04 Oct 2023 issue of the New Statesman, Labour in Power