
Apocalypse soon: the scientists preparing for the end times

A growing community of scientists, philosophers and tech billionaires believe we need to start thinking seriously about the threat of human extinction.

Illustration: Darrel Rees/Heart

The men were too absorbed in their work to notice my arrival at first. Three walls of the conference room held whiteboards densely filled with algebra and scribbled diagrams. One man jumped up to sketch another graph, and three colleagues crowded around to examine it more closely. Their urgency surprised me, though it probably shouldn’t have. These academics were debating what they believe could be one of the greatest threats to mankind – could superintelligent computers wipe us all out?

I was visiting the Future of Humanity Institute, a research department at Oxford University founded in 2005 to study the “big-picture questions” of human life. One of its main areas of research is existential risk. The physicists, philosophers, biologists, economists, computer scientists and mathematicians of the institute are students of the apocalypse.

Predictions of the end of history are as old as history itself, but the 21st century poses new threats. The development of nuclear weapons marked the first time that we had the technology to end all human life. Since then, advances in synthetic biology and nanotechnology have increased the potential for human beings to do catastrophic harm by accident or through deliberate, criminal intent.

In July this year, long-forgotten vials of smallpox – a virus believed to be “dead” – were discovered at a research centre near Washington, DC. Now imagine some similar incident in the future, but involving an artificially generated killer virus or nanoweapons. Some of these dangers are closer than we might care to imagine. When Syrian hackers sent a message from the Associated Press Twitter account that there had been an attack on the White House, the Standard & Poor’s 500 index briefly lost $136bn in value. What unthinkable chaos would be unleashed if someone found a way to empty people’s bank accounts?

While previous doomsayers have relied on religion or superstition, the researchers at the Future of Humanity Institute want to apply scientific rigour to understanding apocalyptic outcomes. How likely are they? Can the risks be mitigated? And how should we weigh up the needs of future generations against our own?

The FHI was founded nine years ago by Nick Bostrom, a Swedish philosopher, when he was 32. Bostrom is one of the leading figures in this small but expanding field of study. It was the first organisation of its kind in the UK, and Bostrom is also an adviser on the country’s second: the Centre for the Study of Existential Risk at Cambridge University, which was launched in 2012. There are a few similar research bodies in the US, too: in May, the Future of Life Institute opened in Boston at MIT, joining the Machine Intelligence Research Institute in Berkeley, California.

“We’re getting these more and more powerful technologies that we can use to have more and more wide-ranging impacts on the world and ourselves, and our level of wisdom seems to be growing more slowly. It’s a bit like a child who’s getting their hands on a loaded pistol – they should be playing with rattles or toy soldiers,” Bostrom tells me when we meet in his sunlit office at the FHI, surrounded by yet more whiteboards. “As a species, we’re giving ourselves access to technologies that should really have a higher maturity level. We don’t have an option – we’re going to get these technologies. So we just have to mature more rapidly.”

I’d first met Bostrom in London a month earlier, at the launch of his most recent book, Superintelligence: Paths, Dangers, Strategies. He had arrived late. “Our speaker on superintelligence has got lost,” the chair joked. It was a Thursday lunchtime but the auditorium at the RSA on the Strand was full. I was sitting next to a man in his early twenties with a thick beard who had leant in to ask, “Have you seen him argue that we’re almost certainly brains in vats? It sounds so out-there, but when he does it it’s so cool!” He looked star-struck when Bostrom eventually bounded on stage.

Dressed in a checked shirt, stripy socks and tortoiseshell glasses, Bostrom rushed through his presentation, guided by some incongruously retro-looking PowerPoint slides. The consensus among experts in artificial intelligence (AI) is that they will develop a computer with human-level intelligence in the next 50 years, he said. Once they have succeeded in doing this, it might not take so long to develop machines smarter than we are. This new superintelligence would be extremely powerful, and may be hard to control. It would, for instance, be better at computer programming than human beings, and so could improve its own capabilities faster than scientists could. We may witness a “superintelligence explosion” as computers begin improving themselves at an alarming rate.

If we handle it well, the development of superintelligence might be one of the best things ever to happen to humanity. These smart machines could tackle all the problems we are too stupid to solve. But it could also go horribly wrong. Computers may not share our values and social understanding. Few human beings set out to harm gorillas deliberately, Bostrom pointed out, but because we are cleverer, society is organised around our needs, not those of gorillas. In a future controlled by ultra-smart machines, we could well be the gorillas.

After the book launch, the publishers invited me to join them and Bostrom for a drink. We drank white wine, and Bostrom asked for green tea. Over a bowl of wasabi peanuts, he mentioned casually that he likes to download lectures and listen to them at three times the normal speed while he exercises. “I have an app that adjusts the pitch so it’s not like a Mickey Mouse voice,” he explained, assuming perhaps that this was the reason for my surprised expression. I sensed that he was quite keen to leave us to our wine. “Nick is the most focused person I know,” a colleague later told me.


Bostrom hated school when he was growing up in Helsingborg, a coastal town in Sweden. Then, when he was 15, he stumbled on the philosophy section of his local library and began reading the great German philosophers: Nietzsche, Schopenhauer, Kant. “I discovered there was this wider life of the mind I had been oblivious to before then, and those big gates flung open and I had this sense of having lost time, because I had lost the first 15 years of my life,” he told me. “I had this sense of urgency that meant I knew I couldn’t lose more years, or it could be too late to amount to anything.”

There was no real “concept” of existential risk when Bostrom was a graduate student in London in the mid-1990s. He completed a PhD in philosophy at the London School of Economics while also studying computational neuroscience and astrophysics at King’s College London. “People were talking about human extinction but not about ways of thinking about permanently destroying our future,” he says.

Yet he began to meet people through the mailing lists of early websites who were also drawn to the ideas that were increasingly preoccupying him. They often called themselves “transhumanists” – referring to an intellectual movement interested in the transformation of the human condition through technology – and they were, he concedes, “mainly crackpots”.

In 2000 he became a lecturer in philosophy at Yale and then, two years later, he returned to the UK as a postdoctoral fellow at the British Academy. He planned to pursue his study of existential risk in tandem with a philosophy teaching post, until he met the futurologist and multibillionaire computer scientist James Martin, who was interested in his work. In 2005 Martin donated $150m to the University of Oxford to set up the Oxford Martin School, and the FHI was one of the first research bodies to receive funding through the school. It can be a challenge to be taken seriously in a field that brushes so close to science fiction. “The danger is it can deter serious researchers from the field for fear of being mistaken or associated with crackpots,” Bostrom explained. A “university with a more marginal reputation” might have been less confident about funding such radical work.

Nevertheless, the FHI has expanded since then to include 18 full-time research staff, drawn from a wide range of disciplines. I spoke to Daniel Dewey, a 27-year-old who last year left his job as a software engineer at Google to join the FHI as a research fellow studying machine superintelligence. A few of his colleagues at Google had introduced him to research on the emerging risks of AI, and he began reading about the subject in his spare time. He came across a problem related to the safety of AIs that he couldn’t solve, and it became the hook. “I was thinking about it all the time,” Dewey recalled.

One of the concerns expressed by those studying artificial intelligence is that machines – because they lack our cultural, emotional and social intuition – might adopt dangerous methods to pursue their goals. Bostrom sometimes uses the example of a superintelligent paper-clip maker which works out that it could create more paper clips by extracting carbon atoms from human bodies. “You at least want to be able to say, ‘I want you to achieve this simple goal and not do anything else that would have a dramatic impact on the world,’ ” Dewey explained.

Unfortunately, it turns out that it is very difficult to meet this simple request in practice. Dewey emailed Toby Ord, an Australian philosopher who also works at the FHI, who replied that he didn’t know the answer, either, but if Dewey came to Oxford they could discuss it. He did, and he soon decided that he might be able to “make a difference” at the institute. So he quit Google.

Many of the FHI researchers seem motivated by a strong sense of moral purpose. Ord is also a founder of Giving What We Can, an organisation whose members pledge 10 per cent of their income to help tackle poverty. Ord gives more than this: anything he earns above £20,000, he donates to charity. Despite his modest salary, he plans to give away £1m over his lifetime.

Ord lives in Oxford with his wife, a medical doctor who has also signed up to the giving pledge, and baby daughter. “I’m living off the median income in the UK, so I can’t complain,” he told me, but they live frugally. The couple dine out no more than once a month, and he treats himself to one coffee a week. Ord sees a natural connection between this and his work at the FHI. “I was focusing on how to think carefully and apply academic scholarship to how I give in my life and helping others to give, too. So when it comes to existential risk I’m interested in the idea that another way of helping people is to figure out how to help future generations,” he said.

Ord is working on a report for the government’s chief scientific adviser on risk and emerging technology. Most researchers at the FHI and at the Centre for the Study of Existential Risk hope that such analysis will gradually be integrated into national policymaking. But, for now, both institutions are surviving on donations from individual philanthropists.


One of the most generous donors to scientists working in the discipline is Jaan Tallinn, the 42-year-old Estonian computer whizz and co-founder of Skype and Kazaa, a file-sharing program. He estimates that he has donated “a couple of million dollars” to five research groups in the US and three in the UK (the FHI, the CSER and 80,000 Hours, an organisation promoting effective philanthropy, which also has a close interest in existential risk). This year he has given away $800,000 (£480,000).

His involvement in the founding of the CSER came after a chance encounter in a taxi in Copenhagen with Huw Price, professor of philosophy at Cambridge. Tallinn told Price that he thought the chances of him dying in an artificial-intelligence-related disaster were higher than those of him dying of cancer or a heart attack. Tallinn has since said that he was feeling particularly pessimistic that day, but Price was nevertheless intrigued. The computer whizz reminded him of another professional pessimist: Martin Rees, the Astronomer Royal.

In 2003, Rees published Our Final Century, in which he outlined his view that mankind has only a 50/50 chance of surviving to 2100. (In the US the book was published as Our Final Hour – because, Rees likes to joke, “Americans like instant gratification”.) A TED talk that Rees gave on the same subject in July 2005 has been viewed almost 1.6 million times online. In it, he appears hunched over his lectern, but when he begins to speak, his fluency and energy are electrifying. “If you take 10,000 people at random, 9,999 have something in common: their business and interests lie on or near the earth’s surface. The odd one out is an astronomer and I am one of that strange breed,” he begins.

Studying the distant reaches of the universe has not only given Rees an appreciation of humanity’s precious, fleeting existence – if you imagine Planet Earth’s lifetime as a single year, the 21st century would be a quarter of a second, he says – but also allowed him an insight into the “extreme future”. In six billion years the sun will run out of fuel. “There’s an unthinking tendency to imagine that humans will be there, experiencing the sun’s demise, but any life and intelligence that exists then will be as different from us as we are from bacteria.”

Even when you consider these vast timescales and events that have changed the earth dramatically, such as asteroid impacts and huge volcanic eruptions, something extraordinary has happened in recent decades. Never before have human beings been so able to alter our surroundings – through global warming or nuclear war – or to alter ourselves, as advances in biology and computer science open up possibilities of transforming the way we think and live. So it is understandable that Price immediately saw a connection with Tallinn’s interests.

Price invited him to Cambridge and took Tallinn on a tour of what he describes as the “two birthplaces of existential risk”. First they went for dinner at King’s College, where the pioneering computer scientist Alan Turing was a fellow from 1935 to 1945. Then they went for drinks at the Eagle pub, where in 1953 Francis Crick and James Watson announced that they had cracked the double-helix structure of DNA. When I came to the city to interview Price, he took me on a mini-existential risk tour to echo Tallinn’s, inviting me for a pint of DNA ale one evening with a few CSER researchers. For the second time in several weeks I found myself drinking with people I sensed would rather be in the library.

Tallinn’s first trip to Cambridge was successful. With Price and Rees, he co-founded the CSER, providing the seed funding. The centre already has several high-profile advisers, including the physicist Stephen Hawking, Elon Musk (the multibillionaire behind PayPal and SpaceX) and the ethicist Peter Singer. They are hoping to find funding for a couple of postdoctoral positions in the next year or so. “I see it as part of our role to act as a kind of virus, spreading interest and concern about this issue [existential risk] into other academic disciplines,” Price explained. He hopes that eventually institutions studying potentially dangerous fields will develop a risk-awareness culture. Tallinn is trying to change mindsets in other ways, too. He says he sometimes invests in tech companies as an excuse to “hang around in the kitchen, just so I get a feel of what they are doing and can try and influence the culture”.

I met Price and Tallinn for a coffee in the senior common room at Trinity College. At one point a man ran up to us to slap Tallinn on the back and say: “Hey, Jaan. Remember me? Remember our crazy Caribbean days?”

Tallinn looked confused, and the man seemed to sway from side to side, as if dancing with an imaginary hula girl. It turned out they had met at a conference in 2013 and the dancer was a renowned mathematician (I will spare him any blushes). Price said that when he first arrived at Cambridge Tallinn was a minor celebrity; several dons approached him to thank him for making it easier to speak to their children and grandchildren overseas.

Most of the large donors funding existential risk research work in finance or technology. Neither the CSER nor the FHI publishes details of individual donors, but the Machine Intelligence Research Institute (Miri) in Berkeley does. The three biggest donors to Miri are Peter Thiel of PayPal ($1.38m); Jed McCaleb, founder of Mt Gox, once the main exchange platform for bitcoins ($631,137); and Tallinn. Tech billionaires, like bankers, are more likely to have money to spare – but are they also more acutely aware of the dangers emerging in their industry? Tallinn speaks of Silicon Valley’s “culture of heroism”. “The traditional way of having a big impact in the world is taking something that the public thinks is big and trying to back it, like space travel or eradicating diseases,” he said. “Because I don’t have nearly enough resources for backing something like that, I’ve taken an area that’s massively underappreciated and not that well understood by the public.”

I wondered, when I spoke to Price and Tallinn, how big a difference they believed their work can make. It would be naive to imagine that one could ever convince scientists to stop working in a specific field – whether artificial intelligence or the manipulation of viruses – simply because it is dangerous. The best you could hope for would be a greater awareness of the risks posed by new technologies, and improved safety measures. You might want to control access to technology (just as we try to limit access to the enriched uranium needed to make nuclear bombs), but couldn’t this turn science into an increasingly elite occupation? Besides, it is hard to control access to technology for ever, and we know that in the modern, interconnected world small hacks can have catastrophic effects. So how great an impact can a few dozen passionate researchers make, spread across a handful of organisations?

“I’m not supremely confident it’s going to make a big difference, but I’m very confident it will make a small difference,” Price said. “And, given that we’re dealing with huge potential costs, I think it’s worth making a small difference because it’s like putting on a seat belt: it’s worth making a small effort because there’s a lot at stake.”

Tallinn was more upbeat. “There’s a saying in the community: ‘Shut up and multiply’ – just do the calculations,” he said. “Sometimes I joke when there’s particularly good news in this ecosystem, like when I’ve had a good phone call with someone, that ‘OK, that’s another billion saved’.

“Being born into a moment when the fate of the universe is at stake is a lot of fun.” 
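The “shut up and multiply” slogan refers to expected-value arithmetic, and a rough, purely illustrative calculation makes sense of the “another billion saved” quip. The figures below are assumptions chosen for the sketch, not numbers Tallinn cites: suppose humanity’s future could contain on the order of $10^{13}$ people, and that some piece of work nudges the probability of extinction down by a hundredth of a percentage point ($10^{-4}$). Then, in expectation:

$$\underbrace{10^{13}}_{\text{assumed future lives}} \times \underbrace{10^{-4}}_{\text{assumed risk reduction}} = 10^{9} \text{ lives saved – a billion, on paper at least.}$$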

Sophie McBain is an assistant editor of the New Statesman 


This article first appeared in the 17 September 2014 issue of the New Statesman, Scotland: What Next?

An artist's version of the Reichstag fire, which Hitler blamed on the communists. Credit: Dezain Unkie/Alamy

The art of the big lie: the history of fake news

From the Reichstag fire to Stalin’s show trials, the craft of disinformation is nothing new.

We live, we’re told, in a post-truth era. The internet has hyped up postmodern relativism, and created a kind of gullible cynicism – “nothing is true, and who cares anyway?” But the thing that exploits this mindset is what the Russians call dezinformatsiya. Disinformation – strategic deceit – isn’t new, of course. It has played a part in the battle that has raged between mass democracy and its enemies since at least the First World War.

Letting ordinary people pick governments depends on shared trust in information, and this is vulnerable to attack – not just by politicians who want to manipulate democracy, but by those on the extremes who want to destroy it. In 1924, the first Labour government faced an election. With four days to go, the Daily Mail published a secret letter in which the leading Bolshevik Grigory Zinoviev heralded the government’s treaties with the Soviets as a way to help recruit British workers for Leninism. Labour’s vote actually went up, but the Liberal share collapsed, and the Conservatives returned to power.

We still don’t know exactly who forged the “Zinoviev Letter”, even after exhaustive investigations of British and Soviet intelligence archives in the late 1990s by the then chief historian of the Foreign Office, Gill Bennett. She concluded that the most likely culprits were White Russian anti-Bolsheviks, outraged at Labour’s treaties with Moscow, probably abetted by sympathetic individuals in British intelligence. But whatever the precise provenance, the case demonstrates a principle that has been in use ever since: cultivate your lie from a germ of truth. Zinoviev and the Comintern were actively engaged in trying to stir revolution – in Germany, for example. Those who handled the letter on its journey from the forger’s desk to the front pages – MI6 officers, Foreign Office officials, Fleet Street editors – were all too ready to believe it, because it articulated their fear that mass democracy might open the door to Bolshevism.

Another phantom communist insurrection opened the way to a more ferocious use of disinformation against democracy. On the night of 27 February 1933, Germany’s new part-Nazi coalition was not yet secure in power when news started to hum around Berlin that the Reichstag was on fire. A lone left-wing Dutchman, Marinus van der Lubbe, was caught at the scene and said he was solely responsible. But Hitler assumed it was a communist plot, and seized the opportunity to do what he wanted to do anyway: destroy them. The suppression of the communists was successful, but the claim it was based on rapidly collapsed. When the Comintern agent Georgi Dimitrov was tried for organising the fire, alongside fellow communists, he mocked the charges against him, which were dismissed for lack of evidence.

Because it involves venturing far from the truth, disinformation can slip from its authors’ control. The Nazis failed to pin blame on the communists – and then the communists pinned blame on the Nazis. Dimitrov’s comrade Willi Münzenberg swiftly organised propaganda suggesting that the fire was too convenient to be Nazi good luck. A “counter-trial” was convened in London; a volume called The Brown Book of the Reichstag Fire and Hitler Terror was rushed into print, mixing real accounts of Nazi persecution of communists – the germ of truth again – with dubious documentary evidence that they had started the fire. Unlike the Nazis’ disinformation, this version stuck, for decades.

Historians such as Richard Evans have argued that both stories about the fire were false, and it really was one man’s doing. But this case demonstrates another disinformation technique still at work today: hide your involvement behind others, as Münzenberg did with the British great and good who campaigned for the Reichstag prisoners. In the Cold War, the real source of disinformation was disguised with the help of front groups, journalistic “agents of influence”, and the trick of planting a fake story in an obscure foreign newspaper, then watching as the news agencies picked it up. (Today, you just wait for retweets.)

In power, the Nazis made much use of a fictitious plot that did, abominably, have traction: The Protocols of the Elders of Zion, a forged text first published in Russia in 1903, claimed to be a record of a secret Jewish conspiracy to take over the world – not least by means of its supposed control of everyone from bankers to revolutionaries. As Richard Evans observes, “If you subject people to a barrage of lies, in the end they’ll begin to think well maybe they’re not all true, but there must be something in it.” In Mein Kampf, Hitler argued that the “big lie” always carries credibility – an approach some see at work not only in the Nazis’ constant promotion of the Protocols but in the pretence that their Kristallnacht pogrom in 1938 was spontaneous. (It is ironic that Hitler coined the “big lie” as part of an attack on the Jews’ supposed talent for falsehood.) Today, the daring of the big lie retains its force: even if no one believes it, it makes smaller untruths less objectionable in comparison. It stuns opponents into silence.

Unlike the Nazis, the Bolshevik leaders were shaped by decades as hunted revolutionaries, dodging the Tsarist secret police, who themselves had had a hand in the confection of the Protocols. They occupied the paranoid world of life underground, governed by deceit and counter-deceit, where any friend could be an informer. By the time they finally won power, disinformation was the Bolsheviks’ natural response to the enemies they saw everywhere. And that instinct endures in Russia even now.

In a competitive field, perhaps the show trial is the Soviet exercise in upending the truth that is most instructive today. These sinister theatricals involved the defendants “confessing” their crimes with great sincerity and detail, even if the charges were ludicrous. By 1936, Stalin felt emboldened to drag his most senior rivals through this process – starting with Grigory Zinoviev.

The show trial is disinformation at its cruellest: coercing someone falsely to condemn themselves to death, in so convincing a way that the world’s press writes it up as truth. One technique involved was perfected by the main prosecutor, Andrey Vyshinsky, who bombarded the defendants with insults such as “scum”, “mad dogs” and “excrement”. Besides intimidating the victim, this helped to distract attention from the absurdity of the charges. Barrages of invective on Twitter are still useful for smearing and silencing enemies.


The show trials were effective partly because they deftly reversed the truth. To conspire to destroy the defendants, Stalin accused them of conspiring to destroy him. He imposed impossible targets on straining Soviet factories; when accidents followed, the managers were forced to confess to “sabotage”. Like Hitler, Stalin made a point of saying the opposite of what he did. In 1936, the first year of the Great Terror, he had a rather liberal new Soviet constitution published. Many in the West chose to believe it. As with the Nazis’ “big lie”, shameless audacity is a disinformation strategy in itself. It must have been hard to accept that any regime could compel such convincing false confessions, or fake an entire constitution.

No one has quite attempted that scale of deceit in the post-truth era, but reversing the truth remains a potent trick. Just think of how Donald Trump countered the accusation that he was spreading “fake news” by making the term his own – turning the charge on his accusers, and even claiming he’d coined it.

Post-truth describes a new abandonment of the very idea of objective truth. But George Orwell was already concerned that this concept was under attack in 1946, helped along by the complacency of dictatorship-friendly Western intellectuals. “What is new in totalitarianism,” he warned in his essay “The Prevention of Literature”, “is that its doctrines are not only unchallengeable but also unstable. They have to be accepted on pain of damnation, but on the other hand they are always liable to be altered on a moment’s notice.”

A few years later, the political theorist Hannah Arendt argued that Nazis and Stalinists, each immersed in their grand conspiratorial fictions, had already reached this point in the 1930s – and that they had exploited a similar sense of alienation and confusion in ordinary people. As she wrote in her 1951 book, The Origins of Totalitarianism: “In an ever-changing, incomprehensible world the masses had reached the point where they would, at the same time, believe everything and nothing, think that everything was possible and that nothing was true.” There is a reason that sales of Arendt’s masterwork – and Orwell’s Nineteen Eighty-Four – have spiked since November 2016.

During the Cold War, as the CIA got in on the act, disinformation became less dramatic, more surreptitious. But show trials and forced confessions continued. During the Korean War, the Chinese and North Koreans induced a series of captured US airmen to confess to dropping bacteriological weapons on North Korea. One lamented that he could barely face his family after what he’d done. The pilots were brought before an International Scientific Commission, led by the eminent Cambridge scientist Joseph Needham, which investigated the charges. A documentary film, Oppose Bacteriological Warfare, was made, showing the pilots confessing and Needham’s Commission peering at spiders in the snow. But the story was fake.

The germ warfare hoax was a brilliant exercise in turning democracy’s expectations against it. Scientists’ judgements, campaigning documentary, impassioned confession – if you couldn’t believe all that, what could you believe? For the genius of disinformation is that even exposure doesn’t disable it. All it really has to do is sow doubt and confusion. The story was finally shown to be fraudulent in 1998, through documents transcribed from Soviet archives. The transcripts were authenticated by the historian Kathryn Weathersby, an expert on the archives. But as Dr Weathersby laments, “People come back and say ‘Well, yeah, but, you know, they could have done it, it could have happened.’”

There’s an insidious problem here: the same language is used to express blanket cynicism as empirical scepticism. As Arendt argued, gullibility and cynicism can become one. If opponents of democracy can destroy the very idea of shared, trusted information, they can hope to destabilise democracy itself.

But there is a glimmer of hope here too. The fusion of cynicism and gullibility can also afflict the practitioners of disinformation. The most effective lie involves some self-deception. So the show trial victims seem to have internalised the accusations against them, at least for a while, but so did their tormentors. As the historian Robert Service has written, “Stalin frequently lied to the world when he was simultaneously lying to himself.”

Democracy might be vulnerable because of its reliance on the idea of shared truth – but authoritarianism has a way of undermining itself by getting lost in its own fictions. Disinformation is not only a danger to its targets. 

Phil Tinline’s documentary “Disinformation: A User’s Guide” will be broadcast on BBC Radio 4 at 8pm, 17 March
