French Guiana's Amazonia region. What happens here affects the climate of the entire world. Photo: Jody Amiet/AFP/Getty

Martin Rees: The world in 2050 and beyond

In today’s runaway world, we can’t aspire to leave a monument lasting 1,000 years, but it would surely be shameful if we persisted in policies that denied future generations a fair inheritance and left them with a more depleted and more hazardous world.

I’ll start with a flashback to 1902. In that year the young H G Wells gave a celebrated lecture at the Royal Institution in London. He spoke mainly in visionary mode. “Humanity,” he proclaimed, “has come some way, and the distance we have travelled gives us some earnest of the way we have to go. All the past is but the beginning of a beginning; all that the human mind has accomplished is but the dream before the awakening.” His rather purple prose still resonates more than 100 years later – he realised that we humans aren’t the culmination of emergent life.

But Wells wasn’t an optimist. He also highlighted the risk of global disaster: “It is impossible to show why certain things should not utterly destroy and end the human story... and make all our efforts vain... something from space, or pestilence, or some great disease of the atmosphere, some trailing cometary poison, some great emanation of vapour from the interior of the earth, or new animals to prey on us, or some drug or wrecking madness in the mind of man”.

I quote Wells because he reflects the mix of optimism and anxiety – and of speculation and science – which I’ll try to offer in this lecture. Were he writing today, he would have been elated by our expanded vision of life and the cosmos – but he’d have been even more anxious about the perils we might face. The stakes are indeed getting higher: new science offers huge opportunities, but its consequences could jeopardise our survival. Many are concerned that it is ‘running away’ so fast that neither politicians nor the lay public can assimilate or cope with it.

My own expertise is in astronomy and space technology, so you may imagine that I’m kept awake at night by worry about asteroid impacts. Not so. Indeed this is one of the few threats that we can quantify. Every ten million years or so, a body a few kilometres across will hit the earth, causing global catastrophe – there are a few chances in a million that this is how we’ll die. But there are larger numbers of smaller asteroids that could cause regional or local devastation. A body of, say, 300m across, if it fell into the Atlantic, would produce huge tsunamis that would devastate the east coast of the US, as well as much of Europe. And still smaller impacts are more frequent: one in Siberia in 1908 released energy equivalent to 5 megatons.

Can we be forewarned of these impacts? The answer is yes. There are plans to survey the million potential earth-crossing asteroids bigger than 50m and track their orbits precisely enough to predict possible impacts. With forewarning of an impact, action could be taken to evacuate the most vulnerable areas. Even better news is that during this century we could develop the technology to protect us. A ‘nudge’, imparted a few years before the threatened impact, would only need to change an asteroid’s velocity by a millimetre per second in order to deflect its path away from the earth.

If you calculate an insurance premium in the usual way, by multiplying probability by consequences, it turns out that it is worth spending a billion dollars a year to reduce asteroid risk.
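Both the lifetime odds and the premium follow from the same arithmetic. Here is a back-of-envelope sketch in Python; only the once-per-ten-million-years impact rate comes from the argument above, while the lifetime and the dollar figure for ‘consequences’ are my illustrative assumptions:

```python
# Back-of-envelope asteroid risk arithmetic. The impact rate (~once per
# ten million years for a km-scale body) is from the text; the 80-year
# lifetime and the dollar "consequences" figure are assumptions.

annual_impact_prob = 1 / 10_000_000   # ~1e-7 per year

# Odds that a given person's life ends this way, over an assumed
# 80-year lifetime (rare-event approximation: just multiply):
lifetime_odds = annual_impact_prob * 80
print(f"lifetime odds: ~{lifetime_odds:.0e}")     # a few in a million

# Premium = probability x consequences. Suppose a global catastrophe
# "costs" of order $10^16 in lost lives and output (pure assumption):
consequences_usd = 1e16
expected_annual_loss = annual_impact_prob * consequences_usd
print(f"justifiable annual spend: ~${expected_annual_loss:.1e}")
```

On these assumed figures the expected annual loss comes out at about a billion dollars – which is the sense in which that premium is ‘worth’ paying.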

Other natural threats – earthquakes and volcanoes – are less predictable. But there’s one reassuring thing about them, as there is about asteroids: the annual risk they pose isn’t getting bigger. It’s the same for us as it was for the Neanderthals – or indeed for the dinosaurs.

 

Human-induced threats

In contrast, the hazards that are the focus of this talk are those that humans themselves engender – and they now loom far larger. And in discussing them I’m straying far from my ‘comfort zone’ of expertise. So I comment as a ‘citizen scientist’, and as a worried member of the human race. I’ll skate over a range of topics, in the hope of being controversial enough to provoke discussion.

Ten years ago I wrote a book that I entitled Our Final Century? My publisher deleted the question-mark. The American publishers changed the title to Our Final Hour (Americans seek instant gratification).

My theme was this. Earth is 45 million centuries old. But this century is the first when one species – ours – can determine the biosphere’s fate. I didn’t think we’d wipe ourselves out. But I did think we’d be lucky to avoid devastating setbacks. That’s because of unsustainable anthropogenic stresses to ecosystems; because there are more of us, and we’re all more demanding of resources; and – most important of all – because we’re empowered by new technology, which exposes us to novel vulnerabilities.

And we’ve had one lucky escape already.

At any time in the Cold War era – when armament levels escalated beyond all reason – the superpowers could have stumbled towards Armageddon through muddle and miscalculation. During the Cuba crisis I and my fellow-students participated anxiously in vigils and demonstrations. But we would have been even more scared had we then realised just how close we were to catastrophe. Kennedy was later quoted as having said at one stage that the odds were ‘between one in three and evens’. And only when he was long retired did Robert McNamara state frankly that “[w]e came within a hairbreadth of nuclear war without realizing it. It’s no credit to us that we escaped – Khrushchev and Kennedy were lucky as well as wise.” Be that as it may, we were surely at far greater hazard from nuclear catastrophe than from anything nature could do. Indeed the annual risk of thermonuclear destruction during the Cold War was about 10,000 times higher than from asteroid impact.

It is now conventionally asserted that nuclear deterrence worked. In a sense, it did. But that doesn’t mean it was a wise policy. If you play Russian roulette with one or two bullets in the barrel, you are more likely to survive than not, but the stakes would need to be astonishingly high – or the value you place on your life inordinately low – for this to seem a wise gamble. But we were dragooned into just such a gamble throughout the Cold War era. It would be interesting to know what level of risk other leaders thought they were exposing us to, and what odds most European citizens would have accepted, if they’d been asked to give informed consent. For my part, I would not have chosen to risk a one in three – or even one in six – chance of a disaster that would have killed hundreds of millions and shattered the historic fabric of all our cities, even if the alternative were certain Soviet dominance of Western Europe. And of course the devastating consequences of thermonuclear war would have spread far beyond the countries that faced a direct threat, especially if a nuclear winter were triggered.

The threat of global annihilation involving tens of thousands of H-bombs is thankfully in abeyance; there is, though, currently more risk that smaller nuclear arsenals might be used in a regional context, or even by terrorists. But we can’t rule out, later in the century, a geopolitical realignment leading to a standoff between new superpowers. So a new generation may face its own “Cuba” – and one that could be handled less well or less luckily than the 1962 crisis was.

Nuclear weapons are based on 20th-century science. I’ll return later in my talk to the 21st-century sciences – bio, cyber and AI – and what they might portend.

But before that let’s focus on the potential devastation that could be wrought by human-induced environmental degradation and climate change. These threats are long-term and insidious. They stem from humanity’s ever-heavier collective ‘footprint’, which threatens to stress our finite planet’s ecology beyond sustainable limits…

There’s nothing new about these concerns. Doom-laden predictions of environmental catastrophe famously came in the 1970s from the Club of Rome, Paul Ehrlich and other groups. These proved wide of the mark. Unsurprisingly, such memories engender scepticism about the worst-case environmental and climatic projections. But the hazards may merely have been postponed – the pressures are now far higher.

For one thing, the world is more crowded. Fifty years ago, world population was below 3 billion. It now exceeds 7 billion. And by 2050 it’s projected to be between 8.5 and 10 billion, the growth being mainly in Africa and India. We must hope for a demographic transition in those countries whose populations are still rising fast, because the higher the post-2050 population becomes, the greater will be all pressures on resources (especially if the developing world narrows its gap with the developed world in its per capita consumption).

Humans already appropriate around 40 per cent of the world’s biomass and that fraction is growing. The resultant ecological shock could irreversibly impoverish our biosphere. Extinction rates are rising: we’re destroying the book of life before we’ve read it. Biodiversity is a crucial component of human wellbeing. We’re clearly harmed if fish stocks dwindle to extinction; there are plants in the rainforest whose gene pool might be useful to us. But for many environmentalists these ‘instrumental’ – and anthropocentric – arguments aren’t the only compelling ones. For them there are further ethical issues: preserving the richness of our biosphere has value in its own right over and above what it means to us humans.

Pressures on food supplies and on the entire biosphere will be aggravated by climate change. And climate change exemplifies the tension between the science, the public and the politicians. One thing isn’t controversial. The atmospheric CO2 concentration is rising – and this is mainly due to the burning of fossil fuels. Straightforward physics tells us that this build-up will induce a long-term warming trend, superimposed on all the other complicated effects that make climate fluctuate. So far, so good.

But what’s less well understood is how big the effect is. Doubling of CO2 in itself causes just 1.2 degrees warming. But the effect can be amplified by associated changes in water vapour and clouds. We don’t know how important these feedback processes are. The recent fifth report from the IPCC presents a spread of projections. But some things are clear. In particular, if annual CO2 emissions continue to rise unchecked, we risk triggering drastic climate change—leading to the devastating scenarios later in this century portrayed in the recent book by Naomi Oreskes and Erik Conway, and even perhaps the initiation of irreversible melting of the Greenland and Antarctic ice, which would eventually raise sea level by many metres.
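The arithmetic behind that spread can be sketched with the standard linear-feedback relation. Only the 1.2-degree no-feedback figure comes from the argument above; the feedback factors below are illustrative assumptions:

```python
# Feedback amplification of the bare CO2-doubling warming.
# dT = dT0 / (1 - f), where f lumps together the water-vapour and
# cloud feedbacks. The values of f tried below are assumptions chosen
# to span roughly the debated range.

DT0 = 1.2  # deg C per CO2 doubling with no feedbacks

def amplified_warming(f):
    """Linear-feedback relation: total warming for feedback factor f."""
    return DT0 / (1 - f)

for f in (0.0, 0.3, 0.5, 0.7):
    print(f"f = {f:.1f}  ->  {amplified_warming(f):.1f} C per doubling")
```

With f between roughly 0.3 and 0.7 the warming per doubling runs from under 2 to around 4 degrees – which is why pinning down the feedbacks matters so much.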

Many still hope that we can segue towards a low-carbon future without trauma and disaster. But politicians won’t gain much resonance by advocating a bare-bones approach that entails unwelcome lifestyle changes – especially if the benefits are decades into the future. There are however three politically realistic measures that should be pushed. First, all countries could promote measures that actually save money – better energy-efficiency, insulating buildings better and so forth. Second, efforts could focus on reducing the pollutants methane and black carbon. These are minor contributors to global warming, but their reduction would (unlike that of CO2) have more manifest local side-benefits – especially in Asia. And third, there should be a step change in research into clean energy – why shouldn’t it be on a scale comparable to medical research?

The climate debate has been marred by too much blurring between the science, the politics and the commercial interests. Those who don’t like the implications of the IPCC projections have rubbished the science rather than calling for better science. But even if the science were clear-cut, there is wide scope for debate on the policy response. Those who apply a standard discount rate (as, for instance, Bjorn Lomborg’s Copenhagen Consensus recommendations do) are in effect writing off what happens beyond 2050. There is indeed little risk of catastrophe within that time-horizon, so unsurprisingly they downplay the priority of addressing climate change. But if you apply a lower discount rate – and in effect don’t discriminate on grounds of date of birth, and care about those who’ll live into the 22nd century and beyond – then you may deem it worth making an investment now, to protect those future generations against the worst-case scenario and to prevent triggering really long-term changes like the melting of Greenland’s ice.
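To see how much the discount rate drives the conclusion, compare the present value of the same future loss under a market-style rate and a low intergenerational rate. The cost, the horizon and both rates here are my illustrative assumptions:

```python
# Present value of a $1 trillion climate loss incurred in 2100, taken
# here as roughly 85 years ahead. All figures are illustrative
# assumptions, chosen only to show the effect of the rate.

def present_value(cost, rate, years):
    """Standard exponential discounting."""
    return cost / (1 + rate) ** years

COST, YEARS = 1e12, 85
for rate in (0.05, 0.014):   # market-style rate vs Stern-style low rate
    pv = present_value(COST, rate, YEARS)
    print(f"rate {rate:.1%}: present value ${pv:,.0f}")
```

Under the higher rate the 2100 loss shrinks by a factor of about sixty and is easily outweighed by near-term costs; under the lower rate it keeps a substantial fraction of its face value. Much of the policy disagreement is this one parameter.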

So what will actually happen on the climate front? My pessimistic guess is that political efforts to decarbonise energy production won’t gain traction, and that the CO2 concentration in the atmosphere will rise at an accelerating rate throughout the next 20 years. But by then we’ll know with far more confidence – perhaps from advanced computer modelling, but also from how much global temperatures have actually risen by then – just how strongly the feedback from water vapour and clouds amplifies the effect of CO2 itself in creating a ‘greenhouse effect’. If the effect is strong, and the world’s climate consequently seems on a trajectory into dangerous territory, there may then be pressure for ‘panic measures’. These would have to involve a ‘plan B’ – being fatalistic about continuing dependence on fossil fuels, but combatting its effects by some form of geoengineering.

The ‘greenhouse warming’ could be counteracted by (for instance) putting reflecting aerosols in the upper atmosphere or even vast sunshades in space. It seems feasible to throw enough material into the stratosphere to change the world’s climate – indeed what is scary is that this might be within the resources of a single nation, or perhaps even a single corporation. The political problems of such geoengineering may be overwhelming. There could be unintended side-effects. Moreover, the warming would return with a vengeance if the countermeasures were ever discontinued; and other consequences of rising CO2 (especially the deleterious effects of ocean acidification) would be unchecked.

Geoengineering would be an utter political nightmare: not all nations would want to adjust the thermostat the same way. Very elaborate climatic modelling would be needed in order to calculate the regional impacts of any artificial intervention. (It would be a bonanza for lawyers if an individual or a nation could be blamed for bad weather!) Dan Schrag, who’ll be commenting later, is an expert on this topic. But as a non-expert I’d think it prudent to explore geoengineering techniques enough to clarify which options make sense, and perhaps damp down undue optimism about a technical ‘quick fix’ of our climate.

So we’re deep into what Paul Crutzen dubbed the ‘anthropocene’. We’re under long-term threat from anthropogenic global changes to climate and biodiversity – driven by a rising population, ever more demanding of food, energy and other resources. All these issues are widely discussed. What’s depressing is the inaction – for politicians the immediate trumps the long-term; the parochial trumps the global. We need to ask whether nations need to give up more sovereignty to new organisations along the lines of the IAEA, WHO, etc.

 

Threats from novel technology

But for the rest of this talk I’ll address a different topic – our vulnerability to powerful technologies – those we depend on today, and those that still seem futuristic, even science fiction. Unlike climate and environment these are still under-discussed.

Those of us with cushioned lives in the developed world fret too much about minor hazards: improbable air crashes, carcinogens in food, low radiation doses, and so forth. But we are less secure than we think. We (and our political masters) don’t worry enough about scenarios that have thankfully not yet happened – events that could arise as unexpectedly as the 2008 financial crisis, but which could cause world-wide disruption and deal shattering blows to our society.

We live in an interconnected world increasingly dependent on elaborate networks: electric-power grids, air traffic control, international finance, just-in-time delivery, globally-dispersed manufacturing and so forth. Unless these globalised networks are highly resilient, their manifest benefits could be outweighed by catastrophic (albeit rare) breakdowns – real-world analogues of what happened in 2008 to the financial system. Our cities would be paralysed without electricity. Supermarket shelves would be empty within days if supply chains were disrupted. Air travel can spread a pandemic worldwide within days. And social media can spread panic and rumour, and psychic and economic contagion, literally at the speed of light.

These issues impel us to plan internationally. For instance, whether or not a pandemic gets a global grip may hinge on how quickly a Vietnamese poultry farmer can report any strange sickness. And, by the way, the risk that pandemics could cause societal breakdown is far higher than in earlier centuries. English villages in the 14th century continued to function even when the Black Death halved their populations. In contrast, our societies would be vulnerable to breakdown as soon as hospitals overflowed and health services were overwhelmed – which would occur when the fatality rate was still a fraction of one per cent. And the human cost would be worst in the shambolic but burgeoning megacities of the developing world.

Advances in microbiology offer better prospects of containing such disasters. But the same research has downsides too. For instance, in 2012 researchers at Wisconsin, and also at Erasmus University in Holland, showed that it was surprisingly easy to make an influenza virus both virulent and transmissible. When they published, they were pressured to redact some details. And the Wisconsin group has been experimenting on H1N1, the virus that led to the catastrophic 1918 epidemic. Last month the US government decided to cease funding and impose a moratorium on so-called ‘gain of function’ experiments. The concern here was partly that such work could aid terrorists, but partly also that if such experiments weren’t conducted everywhere to the very highest safety and containment standards, there would be a risk of bioerror.

It is hard to make a clandestine H-bomb. In contrast, millions will one day have the capability to misuse biotech, just as they can misuse cybertech today. In the 1970s, in the early days of recombinant DNA research, a group of biologists led by Paul Berg formulated the ‘Asilomar Declaration’, advocating a moratorium on certain types of experiments and setting up guidelines. In retrospect, this move was perhaps over-cautious, but it seemed an encouraging precedent. Today, however, the research community is far larger, far more broadly international, and far more influenced by commercial pressures. Whatever regulations are imposed, on prudential or ethical grounds, they could never be enforced worldwide – any more than the drug laws can. Whatever can be done will be done by someone, somewhere.

In consequence, maybe the most intractable challenges to all governments will stem from the rising empowerment of tech-savvy groups (or even individuals), by bio or cyber technology that becomes potentially ever more devastating – to the extent that even one episode could be too many. This will aggravate the tension between freedom, privacy and security.

The results of releasing dangerous pathogens are so incalculable that bioterror isn’t likely to be deployed by extremist groups with well-defined political aims. But such concerns would not give pause to an eco-fanatic, empowered by the bio-hacking expertise that may soon be routine, who believes that ‘Gaia’ is being threatened by the presence of a few billion too many humans. That’s my worst nightmare. (Most devastating would be a potentially fatal virus that was readily transmissible and had a long latency period).

The global village will have its village idiots and they’ll have global range.

 

Looking beyond 2050

These concerns are relatively near-term. Trends beyond 2050 should make us even more anxious. I’ll venture a word about these – but a tentative word, because scientists have a rotten record as forecasters. Ernest Rutherford, the greatest nuclear physicist of his time, said in the 1930s that nuclear energy was ‘moonshine’. One of my predecessors as Astronomer Royal said, as late as the 1950s, that space travel was ‘utter bilge’. My own crystal ball is very cloudy.

In the latter part of the 21st century the world will be warmer and more crowded – that’s one of the few confident predictions. But we can’t predict how our lives might then have been changed by novel technologies. After all, the speedy societal transformation brought about by the smartphone, the internet and their ancillaries would have seemed magic even 20 years ago. So, looking several decades ahead, we must keep our minds open, or at least ajar, to prospects that may now seem science fiction.

The physicist Freeman Dyson foresees a time when children will be able to design and create new organisms just as routinely as his generation played with chemistry sets. I’d guess that this is comfortably beyond the ‘SF fringe’, but were even part of this scenario to come about, our ecology (and even our species) surely would not long survive unscathed.

But what about another fast-advancing technology: robotics and machine intelligence? Even back in the 1990s IBM’s ‘Deep Blue’ beat Kasparov, the world chess champion. More recently ‘Watson’ won a TV game show. Maybe a new-generation ‘hyper-computer’ could achieve oracular powers that offered its controller dominance of international finance and strategy.

Advances in software and sensors have been slower than in number-crunching capacity. Robots still can’t match the facility of a child in recognising and moving the pieces on a real chessboard. They can’t tie your shoelaces or cut your toenails. But machine learning and sensor technology are advancing apace. If robots could observe and interpret their environment as adeptly as we do, they would truly be perceived as intelligent beings, to which (or to whom) we can relate, at least in some respects, as we do to other people. And their greater processing speed may give them an advantage over us.

But will robots remain docile rather than ‘going rogue’? And what if a hyper-computer developed a mind of its own? If it could infiltrate the internet – and the internet of things – it could manipulate the rest of the world. It may have goals utterly orthogonal to human wishes – or even treat humans as an encumbrance.

Indeed, as early as the 1960s the British mathematician I J Good pointed out that a super-intelligent robot (were it sufficiently versatile) could be the last invention that humans need ever make. Once machines have surpassed human capabilities, they could themselves design and assemble a new generation of even more powerful ones.

Ray Kurzweil, now working at Google, is the leading evangelist for this so-called ‘singularity’. He thinks that humans could transcend biology by merging with computers, maybe losing their individuality and evolving into a common consciousness. In old-style spiritualist parlance, they would ‘go over to the other side’. But he’s worried that it may not happen in his lifetime. So he wants his body frozen until this nirvana is reached. I was once interviewed by a group of ‘cryonic’ enthusiasts – in California (where else!) – called the ‘society for the abolition of involuntary death’. They will freeze your body, so that when immortality’s on offer you can be resurrected. I said I’d rather end my days in an English churchyard than a Californian refrigerator. They derided me as a ‘deathist’. (I was surprised to find that three Oxford professors were cryonics enthusiasts. Two had paid the full whack; a third had taken the cut-price option of just having his head frozen.)

In regard to all these speculations, we don’t know where the boundary lies between what may happen and what will remain science fiction – just as we don’t know whether to take seriously Freeman Dyson’s vision of bio-hacking by children. There are widely divergent views. Some experts, for instance Stuart Russell at Berkeley and Demis Hassabis of DeepMind, think that the AI field, like synthetic biotech, already needs guidelines for ‘responsible innovation’. But others, like Rodney Brooks, think these concerns are ‘misguided’, and too far from realisation to be worth worrying about. And the whole concept is philosophically contentious: John Searle has an article in a recent NYRB dismissing the very idea that a machine could have a mind of its own.

Be that as it may, it’s likely that before 2100, our society and its economy will be transformed by autonomous robots, even though these may be ‘idiot savants’ rather than displaying full human capabilities.

[Books like The Second Machine Age have addressed the economic and social disruption that will ensue when robots replace not just factory workers, but white-collar workers as well (even lawyers are under threat!).]

A short digression:

One context where robots surely have a future is in space. In the second part of this century the whole solar system will be explored by flotillas of miniaturized robots. And, on a larger scale, robotic fabricators may build vast lightweight structures floating in space (solar energy collectors, for instance), perhaps mining raw materials from asteroids.

These robotic advances will erode the practical case for human spaceflight. Nonetheless, I hope people will follow the robots, though it will be as risk-seeking adventurers rather than for practical goals. The most promising developments are spearheaded by private companies. For instance SpaceX, led by Elon Musk, who also makes Tesla electric cars, has launched unmanned payloads and docked with the Space Station. He hopes soon to offer orbital flights to paying customers. Wealthy adventurers are already signing up for a week-long trip round the far side of the Moon – voyaging further from Earth than anyone has been before (but avoiding the greater challenge of a Moon landing and blast-off). I’m told they’ve sold a ticket for the second flight but not for the first flight. We should surely cheer on these private enterprise efforts in space – they can tolerate higher risks than a western government could impose on publicly-funded civilians, and thereby cut costs.

By 2100, groups of pioneers may have established ‘bases’ independent from the Earth – on Mars, or maybe on asteroids. Whatever ethical constraints we impose here on the ground, we should surely wish these adventurers good luck in using all the resources of genetic and cyborg technology to adapt themselves and their progeny to alien environments. This might be the first step towards divergence into a new species: the beginning of the post-human era. And it would also ensure that advanced life would survive, even if the worst conceivable catastrophe befell our planet.

But don’t ever expect mass emigration from Earth. Nowhere in our Solar system offers an environment even as clement as the Antarctic or the top of Everest. It’s a dangerous delusion to think that space offers an escape from Earth’s problems.

And here on Earth we may indeed have a bumpy ride through this century. The scenarios I’ve described – environmental degradation, extreme climate change, or unintended consequences of advanced technology –  could trigger serious, even catastrophic, setbacks to our civilization. But they wouldn’t wipe us all out. They’re extreme, but strictly speaking not ‘existential’.

 

Truly existential risks?

Are there conceivable events that could snuff out all life? Promethean concerns of this kind were raised by scientists working on the atomic bomb project during the Second World War. Could we be absolutely sure that a nuclear explosion wouldn’t ignite all the world’s atmosphere or oceans? Before the Trinity bomb test in New Mexico, Hans Bethe and two colleagues addressed this issue; they convinced themselves that there was a large safety factor. And luckily they were right. We now know for certain that a single nuclear weapon, devastating though it is, can’t trigger a nuclear chain reaction that would utterly destroy the Earth or its atmosphere.

But what about even more extreme experiments? Physicists were (in my view quite rightly) pressured to address the speculative ‘existential risks’ that could be triggered by powerful accelerators at Brookhaven and Geneva that generate unprecedented concentrations of energy. Could physicists unwittingly convert the entire Earth into particles called ‘strangelets’ – or, even worse, trigger a ‘phase transition’ that would shatter the fabric of space itself? Fortunately, reassurance could be offered: indeed I was one of those who pointed out that cosmic rays of much higher energies collide frequently in the Galaxy, yet haven’t ripped space apart. And they have penetrated white dwarfs and neutron stars without triggering their conversion into ‘strangelets’.

But physicists should surely be circumspect and precautionary about carrying out experiments that generate conditions with no precedent even in the cosmos – just as biologists should avoid release of potentially-devastating genetically-modified pathogens.

So how risk-averse should we be? Some would argue that odds of ten million to one against an existential disaster would be good enough, because that is below the chance that, within the next year, an asteroid large enough to cause global devastation will hit the Earth. (This is like arguing that the extra carcinogenic effects of artificial radiation are acceptable if they don’t so much as double the risk from natural radiation.) But to some, this limit may not seem stringent enough. If there were a threat to the entire Earth, the public might properly demand assurance that the probability is below one in a billion – even one in a trillion – before sanctioning such an experiment.
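One way to feel the difference between those thresholds is to convert each into expected deaths per experiment, on the crude assumption that the worst case kills everyone now alive. The expected-value framing is mine; only the population figure appears earlier in the text:

```python
# Expected deaths per experiment at each proposed safety threshold,
# assuming (crudely) that the worst case kills everyone alive (~7 bn).

POPULATION = 7e9

def expected_deaths(p):
    """Probability of disaster times the assumed death toll."""
    return POPULATION * p

for label, p in [("1 in 10 million", 1e-7),
                 ("1 in a billion ", 1e-9),
                 ("1 in a trillion", 1e-12)]:
    print(f"{label}: ~{expected_deaths(p):,.3f} expected deaths")
```

Even the ‘one in ten million’ standard corresponds to hundreds of statistical deaths per run – and this accounting ignores the foreclosed future entirely.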

But can we meaningfully give such assurances? We may offer these odds against the Sun not rising tomorrow, or against a fair die giving 100 sixes in a row; that’s because we’re confident that we understand these things. But if our understanding is shaky – as it plainly is at the frontiers of physics –  we can’t really assign a probability, nor confidently assert that something is stupendously unlikely. It’s surely presumptuous to place extreme confidence in any theories about what happens when atoms are smashed together with unprecedented energy. If a congressional committee asked: ‘are you really claiming that there’s less than a one in a billion chance that you’re wrong?’ I’d feel uncomfortable saying yes.

But on the other hand, if a congressman went on to ask: “Could such an experiment disclose a transformative discovery that – for instance – provided a new source of energy for the world?” I’d again offer high odds against it. The issue is then the relative likelihood of these two unlikely events – one hugely beneficial, the other catastrophic. Innovation is often hazardous, but if we don’t take risks we may forgo disproportionate benefits. Undiluted application of the ‘precautionary principle’ has a manifest downside. There is ‘the hidden cost of saying no’.

And, by the way, the priority that we should assign to avoiding truly existential disasters depends on an ethical question posed by (for instance) the philosopher Derek Parfit, which is this. Consider two scenarios: scenario A wipes out 90 percent of humanity; scenario B wipes out 100 percent. How much worse is B than A? Some would say 10 percent worse: the body count is 10 percent higher. But others would say B was incomparably worse, because human extinction forecloses the existence of billions, even trillions, of future people – and indeed an open-ended post-human future.
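Parfit’s two scenarios can be put into very rough numbers. Only the 90 per cent / 100 per cent split is Parfit’s; the count of potential future people is a pure assumption for illustration (some would put it far higher still):

```python
# Parfit's comparison, crudely quantified. The figure for potential
# future people foreclosed by extinction is an illustrative assumption.

present = 7e9    # people alive now
future = 1e14    # potential future people foreclosed by extinction (assumed)

loss_A = 0.9 * present      # scenario A: 90 per cent die, humanity survives
loss_B = present + future   # scenario B: everyone dies and no one follows

print(f"on body count alone, B is {present / loss_A:.2f}x worse than A")
print(f"counting the foreclosed future, B is ~{loss_B / loss_A:,.0f}x worse")
```

On the body count B is barely worse; counting the foreclosed future it is worse by four orders of magnitude, which is the whole force of the second viewpoint.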

Especially if you accept the latter viewpoint, you’ll agree that existential catastrophes deserve more attention. That’s why some of us in (the other) Cambridge – both natural and social scientists – have inaugurated a research programme (the Centre for the Study of Existential Risk) to address these ‘existential’ risks, as well as the wider class of extreme risks I’ve discussed. We need to deploy the best scientific expertise to assess which alleged risks are pure science fiction, and which could conceivably become real; to consider how to enhance resilience against the more credible ones; and to warn against technological developments that could run out of control. And there are similar efforts elsewhere: at Oxford in the UK, here at MIT, and in other places.

Moreover, we shouldn’t be complacent that all such probabilities are minuscule. We’ve no grounds for assuming that human-induced threats worse than those on our current risk register are improbable: they are newly emergent, so we have a limited time base for exposure to them and can’t be sanguine that we would survive them for long – nor about the ability of governments to cope if disaster strikes. Indeed we have zero grounds for confidence that we can survive the worst that future technologies could bring in their wake.

Technology brings with it great hopes, but also great fears. We mustn’t forget an important maxim: the unfamiliar is not the same as the improbable.

Another digression:

I’m often asked: is there a special perspective that astronomers can offer to science and philosophy? Having worked among them for many years, I have to tell you that contemplation of vast expanses of space and time doesn’t make astronomers serene and relaxed. They fret about everyday hassles as much as anyone. But they do have one special perspective –  an awareness of an immense future.

The stupendous timespans of the evolutionary past are now part of common culture (outside ‘fundamentalist’ circles, at any rate). But most people still somehow think we humans are the culmination of the evolutionary tree. That hardly seems credible to an astronomer. Our Sun formed 4.5 billion years ago, but it’s got 6 billion more before the fuel runs out. And the expanding universe will continue – perhaps forever – destined to become ever colder, ever emptier. To quote Woody Allen, eternity is very long, especially towards the end.

Posthuman evolution – here on Earth and far beyond – could be as prolonged as the Darwinian evolution that’s led to us – and even more wonderful. Any creatures witnessing the Sun’s demise 6 billion years hence won’t be human – they’ll be as different from us as we are from a bug. Indeed evolution will be even faster than in the past, proceeding on a technological rather than a natural-selection timescale.

Even in this ‘concertinaed’ timeline – extending billions of years into the future, as well as into the past – this century may be a defining moment when humans could jeopardise life’s immense potential. That’s why the avoidance of complete extinction has special resonance for an astronomer.

 

Obligations of scientists

Finally, a few thoughts of special relevance to my hosts in STS. Sheila Jasanoff and others have discussed the obligations of scientists when their investigations have potential social, economic and ethical impacts that concern all citizens. These issues are starkly relevant to the theme of this talk. So I’d like, before closing, to offer some thoughts – though with diffidence in front of this audience. It’s important to keep ‘clear water’ between science and policy. Risk assessment should be separate from risk management. Scientists should present policy options based on a consensus of expert opinion; but if they engage in advocacy they should recognise that on the economic, social and ethical aspects of any policy they speak as citizens and not as experts – and will have a variety of views.

I’d highlight some fine exemplars from the past: for instance, the atomic scientists who developed the first nuclear weapons during World War II. Fate had assigned them a pivotal role in history. Many of them – men such as Jo Rotblat, Hans Bethe, Rudolf Peierls and John Simpson (all of whom I was privileged to know in their later years) – returned with relief to peacetime academic pursuits. But the ivory tower wasn’t, for them, a sanctuary. They continued not just as academics but as engaged citizens – promoting efforts to control the power they had helped unleash, through national academies, the Pugwash movement, and other bodies.

They were the alchemists of their time, possessors of secret specialised knowledge. The technologies I’ve discussed today have implications just as momentous as nuclear weapons. But in contrast to the ‘atomic scientists’, those engaged with the new challenges span almost all the sciences, are broadly international – and work in the commercial as well as the public sector.

But they all have a responsibility. You would be a poor parent if you didn’t care what happened to your children in adulthood, even though you may have little control over them. Likewise, scientists shouldn’t be indifferent to the fruits of their ideas – their creations. They should try to foster benign spin-offs – commercial or otherwise. They should resist, so far as they can, dubious or threatening applications of their work, and alert politicians when appropriate. We need to foster a culture of ‘responsible innovation’, especially in fields like biotech, advanced AI and geoengineering.

But, more than that, choices on how technology is applied – what to prioritise, and what to regulate – require wide public debate, and such debate must be informed and leveraged by ‘scientific citizens’, who will have a range of political perspectives. They can do this via campaigning groups, via blogging and journalism, or through political activity. There is a role for national academies too.

A special obligation lies on those in academia and on self-employed entrepreneurs – they have more freedom to engage in public debate than those employed in government service or in industry. (Academics have a special privilege to influence students. Polls show, unsurprisingly, that younger people, who expect to survive most of the century, are more engaged and anxious about long-term and global issues – we should respond to their concerns.)

More should be done to assess, and then minimise, the extreme risks I’ve addressed. But though we live under their shadow, there seems no scientific impediment to achieving a sustainable and secure world, where all enjoy a lifestyle better than those in the ‘west’ do today. We can be technological optimists, even though the balance of effort in technology needs redirection – and needs to be guided by values that science itself can’t provide. But the intractable politics and sociology – the gap between potentialities and what actually happens – engenders pessimism. Politicians look to their own voters – and the next election. Stockholders expect a pay-off in the short run. We downplay what’s happening even now in far-away countries. And we discount too heavily the problems we’ll leave for new generations. Without a broader perspective – without realising that we’re all on this crowded world together – governments won’t properly prioritise projects that are long-term in a political perspective, even if a mere instant in the history of our planet.

“Spaceship Earth” is hurtling through space. Its passengers are anxious and fractious. Their life-support system is vulnerable to disruption and breakdowns. But there is too little planning, too little horizon-scanning, too little awareness of long-term risks.

There needs to be a serious research programme, involving natural and social scientists, to compile a more complete register of these ‘extreme risks’, and to enhance resilience against the more credible ones. The stakes are so high that those involved in this effort will have earned their keep even if they reduce the probability of a catastrophe only in the sixth decimal place.
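A change in the sixth decimal place sounds negligible, but the stakes make it substantial. A minimal expected-value sketch, with an assumed figure for the lives at stake:

```python
# Why a risk reduction 'in the sixth decimal place' can still earn its keep:
# a one-line expected-value calculation with assumed stakes.

lives_at_stake = 8e9  # assumed: everyone alive today
delta_p = 1e-6        # probability reduction in the sixth decimal place

expected_lives_saved = delta_p * lives_at_stake
print(f"Expected lives saved: {expected_lives_saved:,.0f}")  # prints 8,000
```

On these assumptions, shaving a millionth off the probability of a catastrophe that would kill everyone is worth about eight thousand expected lives – comparable to preventing a major disaster outright.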

I’ll close with a reflection on something close to home: Ely Cathedral. This overwhelms us today. But think of its impact 900 years ago – think of the vast enterprise its construction entailed. Most of its builders had never travelled more than 50 miles. The fens were their world. Even the most educated knew of essentially nothing beyond Europe. They thought the world was a few thousand years old – and that it might not last another thousand.

But despite these constricted horizons, in both time and space –  despite the deprivation and harshness of their lives –  despite their primitive technology and meagre resources –  they built this huge and glorious building –  pushing the boundaries of what was possible. Those who conceived it knew they wouldn’t live to see it finished. Their legacy still elevates our spirits, nearly a millennium later.

What a contrast to so much of our discourse today! Unlike our forebears, we know a great deal about our world –  and indeed about what lies beyond. Technologies that our ancestors couldn’t have conceived enrich our lives and our understanding. Many phenomena still make us fearful, but the advance of science spares us from irrational dread.  We know that we are stewards of a precious ‘pale blue dot’ in a vast cosmos –  a planet with a future measured in billions of years –  whose fate depends on humanity’s collective actions this century.

But all too often the focus is short term and parochial. We downplay what’s happening even now in impoverished far-away countries. And we give too little thought to what kind of world we’ll leave for our grandchildren.

In today’s runaway world, we can’t aspire to leave a monument lasting a thousand years, but it would surely be shameful if we persisted in policies that denied future generations a fair inheritance and left them with a more depleted and more hazardous world. Wise choices will require the idealistic and effective efforts of natural scientists, environmentalists, social scientists and humanists – all guided by the knowledge that 21st century science can offer. And by values that science alone can’t provide.

But we mustn’t leap from denial to despair. So, having started with H G Wells, I give the final word to another secular sage, the great immunologist Peter Medawar.

“The bells that toll for mankind are like the bells of Alpine cattle. They are attached to our own necks, and it must be our fault if they do not make a tuneful and melodious sound.”

Martin Rees is a Fellow of Trinity College and Emeritus Professor of Cosmology and Astrophysics at the University of Cambridge. He is also the chair of the Longitude Prize committee, a £10m reward for helping to combat antibiotic resistance, which is now open for submissions. Details here: longitudeprize.org. A version of this lecture was first delivered at the Harvard School of Government on 6 Nov 2014.
