3 August 2023

Longtermism poses a real threat to humanity

William MacAskill’s utopian visions may end in violent upheaval.

By Émile P Torres

I used to be a longtermist. That doesn’t mean what you think it does: longtermism is not the same as long-term thinking. We need more long-term thinking in the world, given that human-caused climate change is predicted to persist for another 10,000 years or so. If new generations arise every 25 years on average, this means that climate change will negatively impact 400 future generations. To put this in perspective, that’s almost twice as many generations as have existed since the dawn of civilisation several millennia ago.

Should we care about these future generations? Of course we should – their suffering counts just as much as ours. The fact that they don’t yet exist doesn’t mean that their lives don’t matter, or matter less than the lives of present-day people. This is why we need long-term thinking – a broadening of our perspective on the world to include not just the eight billion people alive right now but these future victims of climate change, whose lives will be thrown into chaos due to our actions.

Longtermism goes way beyond this. It was developed over the past two decades by philosophers at the University of Oxford including Nick Bostrom, Toby Ord and William MacAskill. Bostrom is one of the leading advocates of the idea that advanced artificial intelligence is going to kill everyone on Earth, and he was the subject of harsh criticism earlier this year after an old email surfaced in which he claimed that “blacks are more stupid than whites”. MacAskill’s past is chequered too, as he was the moral “adviser” of Sam Bankman-Fried, the cryptocurrency billionaire and diehard longtermist charged with perpetrating what American prosecutors have called “one of the biggest financial frauds in American history”. Longtermism is the brainchild of these people – a group of highly privileged white men, based at elite universities, who have come to believe that they know what’s best for humanity as a whole.

Longtermists ask us to imagine the future spanning millions, billions, even trillions of years, during which our descendants have left Earth and colonised other star systems, galaxies and beyond. Though Earth may only remain habitable for another one billion years or so, at which point the sun will grow too luminous for us to survive, the universe itself won’t snuff out the flames of life for an estimated 10¹⁰⁰ years – that’s a 1 followed by 100 zeros, an unimaginably long time.

Because of this, the future “human” population could consist of far more generations than 400. In the grand scheme of things, 400 generations are nothing. One estimate is that 10⁵⁸ people could exist in the universe as a whole, assuming these future people are “digital” rather than “biological”.

Why would they be digital? One reason is that colonising space will almost certainly be impossible for biological beings like us. The conditions of space – DNA-mutating radiation, the absence of gravity, the claustrophobic confines of spacecraft – are hostile, not to mention the amount of time it would take to reach other stars or galaxies. The nearest major galaxy, Andromeda, is about 2.5 million light years away, and travelling even close to the speed of light is totally infeasible. At the fastest current speeds, it would take about 45.6 billion years to reach Andromeda. That’s just not possible for biological beings. But if we were digital, it would be.
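
A rough back-of-the-envelope check supports that order of magnitude. Under the assumption (not stated in the article) that a probe sustains about 17 km/s – roughly Voyager 1’s outbound speed – over a distance of 2.5 million light years, a few lines of Python give the travel time:

    # Sketch only: assumed figures, not the article's own calculation.
    LIGHT_YEAR_KM = 9.461e12      # kilometres per light year
    SECONDS_PER_YEAR = 3.156e7    # seconds per year

    distance_km = 2.5e6 * LIGHT_YEAR_KM   # Andromeda, ~2.5 million light years
    speed_km_s = 17.0                      # assumed sustained probe speed

    travel_years = distance_km / speed_km_s / SECONDS_PER_YEAR
    print(f"{travel_years / 1e9:.0f} billion years")   # roughly 44 billion years

The result is the same order of magnitude as the roughly 45.6 billion years cited above.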

In the longtermist view, these aren’t just claims about what could be, they’re claims about what should be. This is why longtermism goes way beyond long-term thinking: it implies that we have a moral duty of sorts to colonise space, plunder the cosmos, create as many digital people as possible and, in doing this, to maximise the total amount of “value” in the universe. It’s why longtermists are obsessed with estimating how many future people there could be: some say that at least 10⁴⁵ – a 1 followed by 45 zeros – could exist in the Milky Way galaxy alone; others calculate the lower-bound number of 10⁵⁸ in the entire universe, as noted above.

Assuming that these future people would have, on balance, “worthwhile” lives, the fact that they could exist implies that they should exist. As MacAskill writes, “we should… hope that future civilisation will be big. If future people will be sufficiently well-off, then a civilisation that is twice as long or twice as large is twice as good.” This immediately yields “a moral case for space settlement”. Perhaps you can see why Elon Musk, the CEO of SpaceX, calls longtermism “a close match for my philosophy”.

Grand visions of humanity’s future are seductive. I grew up in a religious community, where talk of the world’s end was common. The rapture was going to happen any day, I was told, and I have vivid memories of when Bill Clinton was elected in 1992: I was a young child, gripped by feelings of terror, because most adults around me believed that he was the Antichrist.

I was so deep in religion that whenever I was done reading the Book of Revelation, I would always make sure to close the Bible, out of fear that Satan would read the end-times prophecies, see how God plans to defeat him, and then use this information to trick the Almighty and turn the tables.

These thoughts planted a seed of curiosity about our ultimate fate in the universe – in academic language, they got me interested in “eschatology”, or the study of “last things”. By the early 2000s I’d lost my Christian faith, but only a few years later I discovered the longtermist philosophy, which seemed to provide a science-based answer to deep questions like: where are we going? What does our long-term future look like? Are we destined for extinction, or could humanity’s future be marvellous beyond all imagination?

Longtermism offered its own secular version of utopia, too: by reengineering ourselves, you and I might become immortal super-beings, and by spreading throughout the universe we could construct a multi-galactic utopian paradise of radical abundance, indefinite lifespans and endless happiness – a project that some leading longtermists quite literally call “paradise engineering”.

Yet my upbringing also made me realise that utopian thinking has a menacing downside. The belief in utopia has led people to engage in radical and extreme behaviours. If one believes that the means are justified by the ends, and the ends are quite literally a paradise full of infinite value, then what exactly is off the table for realising these ends?

Bostrom, the father of longtermism, has written that we shouldn’t shy away from preemptive violence if necessary to protect our “posthuman” future, and he argued in 2019 that policymakers should seriously consider implementing a global surveillance system to prevent “civilisational devastation”. More recently, Bostrom’s colleague Eliezer Yudkowsky contended that pretty much everyone on Earth should be “allowed to die” if it means that we might still reach “the stars someday”. He also claimed in Time magazine that militaries should engage in targeted strikes against data centres to stop the development of advanced artificial intelligence, even at the risk of triggering a nuclear war.

When I was a longtermist, I didn’t think much about the potential dangers of this ideology. However, the more I studied utopian movements that became violent, the more I was struck by two ingredients at the heart of such movements. The first was – of course – a utopian vision of the future, which believers see as containing infinite, or at least astronomical, amounts of value. The second was a broadly “utilitarian” mode of moral reasoning, which is to say the kind of means-ends reasoning above. The ends can sometimes justify the means, especially when the ends are a magical world full of immortal beings awash in “surpassing bliss and delight”, to quote Bostrom’s 2020 “Letter from Utopia”.

It dawned on me that these very same ingredients lie at the heart of the longtermist ideology. If humanity survives and colonises the universe, there could be 10⁵⁸ “happy” people in the future. These people would be “posthumans” living in virtual reality worlds where suffering has been abolished, and the limits of possibility would be the virtual skies above them. Longtermists literally talk about “astronomical” amounts of future value.

Longtermism is deeply influenced by utilitarianism, an ethical theory in which the ends can justify the means. If lying maximises the amount of “value” in the universe, then you should lie. The same goes for any other action you can think of, including murder. Most utilitarians would rush to declare that in nearly all cases, murder will, in fact, produce worse overall outcomes. Still, there’s nothing inherently wrong with murder, in the utilitarian view, and when one truly believes that huge amounts of value are at stake, perhaps murder really is the best thing to do.

Henry Sidgwick, an influential 19th-century utilitarian, justified British colonialism on utilitarian grounds; the most famous contemporary utilitarian, Peter Singer, argued with a colleague in their 1985 book Should the Baby Live? that infanticide is morally OK in some cases of babies with disabilities. In their words, “we think that some infants with severe disabilities should be killed”. The same utilitarian attitude led the late Derek Parfit, who could be called the grandfather of longtermism because of his influence on the ideology, to remark that “at least something good came out of the German victory”, after he watched footage of Hitler dancing a jig to celebrate the French surrender to Germany in June 1940. As one observer wrote about this on Twitter, Parfit is “a cautionary tale on the risks of taking utilitarianism too seriously”.

Longtermism combines this kind of “moral” reasoning with a fantastical sci-fi vision of techno-utopia among the heavens. That’s incredibly dangerous, and the statements from Bostrom, Yudkowsky and other longtermists support this conclusion. This is not an ideology that we should want people in positions of power to accept, or even be sympathetic to. Yet longtermism has become profoundly influential. There is Musk, as mentioned; and a UN Dispatch article reports that “the foreign policy community in general and the United Nations in particular are beginning to embrace longtermism”.

AI researchers such as Timnit Gebru affirm that longtermism is everywhere in Silicon Valley. The current race to create advanced AI by companies like OpenAI and DeepMind is driven in part by the longtermist ideology. Longtermists believe that if we create a “friendly” AI, it will solve all our problems and usher in a utopia, but if the AI is “misaligned”, it will destroy humanity and, in the process, obliterate our “vast and glorious” future, to borrow a phrase from Toby Ord. This belief in the utopian-apocalyptic potential of advanced AI is precisely what Sam Altman, the CEO of OpenAI, points to in agreeing with Yudkowsky that “galaxies are indeed at risk” when it comes to getting AI right.

I have been inside the longtermist movement. I was a true believer. Longtermism was my religion after I gave up Christianity – it checked all the same boxes, except that longtermists must rely on themselves, rather than on supernatural deities, to engineer paradise. Now it’s clear to me that longtermism offers a deeply impoverished view of our future, and that it could have catastrophic consequences if taken literally by those in power: preemptive violence, mass surveillance, thermonuclear war – all to “protect” and “preserve” our supposed “long-term potential” in the universe, to quote Ord again.

I care about future people, those 400 generations that will suffer as a result of climate change. You should, too. We need robust long-term thinking to deal with these new, transgenerational problems we’re creating. But longtermism is not the answer. If anything, I see the longtermist ideology as one of the most insidious threats facing humanity today.
