French Guiana's Amazonia region. What happens here affects the climate of the entire world. Photo: Jody Amiet/AFP/Getty

Martin Rees: The world in 2050 and beyond

In today’s runaway world, we can’t aspire to leave a monument lasting 1,000 years, but it would surely be shameful if we persisted in policies that denied future generations a fair inheritance and left them with a more depleted and more hazardous world.

I’ll start with a flashback to 1902. In that year the young H G Wells gave a celebrated lecture at the Royal Institution in London. He spoke mainly in visionary mode. “Humanity,” he proclaimed, “has come some way, and the distance we have travelled gives us some earnest of the way we have to go. All the past is but the beginning of a beginning; all that the human mind has accomplished is but the dream before the awakening.” His rather purple prose still resonates more than 100 years later – he realised that we humans aren’t the culmination of emergent life.

But Wells wasn’t an optimist. He also highlighted the risk of global disaster: “It is impossible to show why certain things should not utterly destroy and end the human story... and make all our efforts vain... something from space, or pestilence, or some great disease of the atmosphere, some trailing cometary poison, some great emanation of vapour from the interior of the earth, or new animals to prey on us, or some drug or wrecking madness in the mind of man”.

I quote Wells because he reflects the mix of optimism and anxiety – and of speculation and science – which I’ll try to offer in this lecture. Were he writing today, he would have been elated by our expanded vision of life and the cosmos –  but he’d have been even more anxious about the perils we might face. The stakes are indeed getting higher: new science offers huge opportunities but its consequences could jeopardise our survival. Many are concerned that it is ‘running away’ so fast that neither politicians nor the lay public can assimilate or cope with it.

My own expertise is in astronomy and space technology, so you may imagine that I’m kept awake at night by worry about asteroid impacts. Not so. Indeed this is one of the few threats that we can quantify. Every ten million years or so, a body a few kilometres across will hit the earth, causing global catastrophe – there are a few chances in a million that this is how we’ll die. But there are larger numbers of smaller asteroids that could cause regional or local devastation. A body of, say, 300 metres across, if it fell into the Atlantic, would produce huge tsunamis that would devastate the east coast of the US, as well as much of Europe. And still smaller impacts are more frequent. One in Siberia in 1908 released energy equivalent to 5 megatons.

Can we be forewarned of these impacts? The answer is yes. There are plans to survey the million potential earth-crossing asteroids bigger than 50m and track their orbits precisely enough to predict possible impacts. With forewarning of an impact, action could be taken to evacuate the most vulnerable areas. Even better news is that during this century we could develop the technology to protect us. A ‘nudge’, imparted a few years before the threatened impact, would only need to change an asteroid’s velocity by a millimetre per second in order to deflect its path away from the earth.
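As a back-of-envelope sketch of why so small a nudge can suffice, assume that an along-track velocity change shifts the asteroid’s arrival point by roughly three times the velocity change multiplied by the lead time (a rough rule of thumb for along-track orbital perturbations), and that a miss requires shifting that point by about one Earth radius. The figures below are illustrative, not mission-design calculations.

```python
# Back-of-envelope sketch with illustrative numbers (not mission-design figures).
# Assumption: an along-track velocity change delta_v shifts the asteroid's
# arrival point by roughly 3 * delta_v * lead_time, and a 'miss' needs that
# shift to be about one Earth radius.

EARTH_RADIUS_M = 6.371e6       # metres
SECONDS_PER_YEAR = 3.156e7
ALONG_TRACK_FACTOR = 3         # rough amplification for an along-track nudge

def required_nudge_mm_per_s(lead_time_years: float) -> float:
    """Velocity change (mm/s) needed to shift the arrival point by one Earth radius."""
    lead_time_s = lead_time_years * SECONDS_PER_YEAR
    delta_v = EARTH_RADIUS_M / (ALONG_TRACK_FACTOR * lead_time_s)   # m/s
    return delta_v * 1000

for years in (5, 10, 20):
    print(f"{years:2d} years of warning -> ~{required_nudge_mm_per_s(years):.0f} mm/s")
# Roughly 13, 7 and 3 mm/s respectively: millimetres per second, provided
# the nudge is applied years in advance.
```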

If you calculate an insurance premium in the usual way, by multiplying probability by consequences, it turns out that it is worth spending a billion dollars a year to reduce asteroid risk.
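A minimal sketch of that premium arithmetic, with the impact rate taken from above; the casualty count and the value assigned to a statistical life are purely illustrative assumptions.

```python
# Expected-loss sketch; the casualty figure and value per statistical life
# are illustrative assumptions, not established numbers.
annual_probability = 1e-7    # ~one civilisation-threatening impact per ten million years
casualties = 2e9             # assumed deaths in such a worst-case impact
value_per_life = 5e6         # assumed 'value of a statistical life', in dollars

expected_annual_loss = annual_probability * casualties * value_per_life
print(f"Expected annual loss: ${expected_annual_loss:,.0f}")   # $1,000,000,000
# On these assumptions the expected loss is about a billion dollars a year --
# the scale of spending the premium argument would justify.
```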

Other natural threats – earthquakes and volcanoes – are less predictable. But there’s one reassuring thing about them, as there is about asteroids: the annual risk they pose isn’t getting bigger. It’s the same for us as it was for the Neanderthals – or indeed for the dinosaurs.

 

Human-induced threats

In contrast, the hazards that are the focus of this talk are those that humans themselves engender – and they now loom far larger. And in discussing them I’m straying far from my ‘comfort zone’ of expertise. So I comment as a ‘citizen scientist’, and as a worried member of the human race. I’ll skate over a range of topics, in the hope of being controversial enough to provoke discussion.

Ten years ago I wrote a book that I entitled Our Final Century? My publisher deleted the question-mark. The American publishers changed the title to Our Final Hour (Americans seek instant gratification).

My theme was this. Earth is 45 million centuries old. But this century is the first when one species – ours – can determine the biosphere’s fate. I didn’t think we’d wipe ourselves out. But I did think we’d be lucky to avoid devastating setbacks. That’s because of unsustainable anthropogenic stresses to ecosystems – because there are more of us and we’re all more demanding of resources. And – most important of all – because we’re empowered by new technology, which exposes us to novel vulnerabilities.

And we’ve had one lucky escape already.

At any time in the Cold War era – when armament levels escalated beyond all reason – the superpowers could have stumbled towards Armageddon through muddle and miscalculation. During the Cuban missile crisis my fellow students and I participated anxiously in vigils and demonstrations. But we would have been even more scared had we then realised just how close we were to catastrophe. Kennedy was later quoted as having said at one stage that the odds of war were ‘between one in three and evens’. And only when he was long retired did Robert McNamara state frankly that “[w]e came within a hairbreadth of nuclear war without realizing it. It’s no credit to us that we escaped – Khrushchev and Kennedy were lucky as well as wise.” Be that as it may, we were surely at far greater hazard from nuclear catastrophe than from anything nature could do. Indeed the annual risk of thermonuclear destruction during the Cold War was about 10,000 times higher than from asteroid impact.

It is now conventionally asserted that nuclear deterrence worked. In a sense, it did. But that doesn’t mean it was a wise policy. If you play Russian roulette with one or two bullets in the barrel, you are more likely to survive than not, but the stakes would need to be astonishingly high – or the value you place on your life inordinately low – for this to seem a wise gamble. Yet we were dragooned into just such a gamble throughout the Cold War era. It would be interesting to know what level of risk other leaders thought they were exposing us to, and what odds most European citizens would have accepted, if they’d been asked to give informed consent. For my part, I would not have chosen to risk a one in three – or even one in six – chance of a disaster that would have killed hundreds of millions and shattered the historic fabric of all our cities, even if the alternative were certain Soviet dominance of Western Europe. And of course the devastating consequences of thermonuclear war would have spread far beyond the countries that faced a direct threat, especially if a nuclear winter were triggered.

The threat of global annihilation involving tens of thousands of H-bombs is thankfully in abeyance; there is, though, currently more risk that smaller nuclear arsenals might be used in a regional context, or even by terrorists. But we can’t rule out, later in the century, a geopolitical realignment leading to a standoff between new superpowers. So a new generation may face its own “Cuba” – and one that could be handled less well or less luckily than the 1962 crisis was.

Nuclear weapons are based on 20th-century science. I’ll return later in my talk to the 21st-century sciences – bio, cyber and AI – and what they might portend.

But before that let’s focus on the potential devastation that could be wrought by human-induced environmental degradation and climate change. These threats are long-term and insidious. They stem from humanity’s ever-heavier collective ‘footprint’, which threatens to  stress our finite planet’s ecology beyond sustainable limits…

There’s nothing new about these concerns. Doom-laden predictions of environmental catastrophe famously came in the 1970s from the Club of Rome, Paul Ehrlich and other groups. These proved wide of the mark. Unsurprisingly, such memories engender scepticism about the worst-case environmental and climatic projections. But the hazards may merely have been postponed – the pressures are now far higher.

For one thing, the world is more crowded. Fifty years ago, world population was below 3 billion. It now exceeds 7 billion. And by 2050 it’s projected to be between 8.5 and 10 billion, the growth being mainly in Africa and India. We must hope for a demographic transition in those countries whose populations are still rising fast, because the higher the post-2050 population becomes, the greater will be all pressures on resources (especially if the developing world narrows its gap with the developed world in its per capita consumption).

Humans already appropriate around 40 per cent of the world’s biomass and that fraction is growing. The resultant ecological shock could irreversibly impoverish our biosphere. Extinction rates are rising: we’re destroying the book of life before we’ve read it. Biodiversity is a crucial component of human wellbeing. We’re clearly harmed if fish stocks dwindle to extinction; there are plants in the rainforest whose gene pool might be useful to us. But for many environmentalists these ‘instrumental’ – and anthropocentric – arguments aren’t the only compelling ones. For them there are further ethical issues: preserving the richness of our biosphere has value in its own right over and above what it means to us humans.

Pressures on food supplies and on the entire biosphere will be aggravated by climate change. And climate change exemplifies the tension between the science, the public and the politicians. One thing isn’t controversial. The atmospheric CO2 concentration is rising – and this is mainly due to the burning of fossil fuels. Straightforward physics tells us that this build-up will induce a long-term warming trend, superimposed on all the other complicated effects that make climate fluctuate. So far, so good.

But what’s less well understood is how big the effect is. Doubling of CO2 in itself causes just 1.2 degrees of warming. But the effect can be amplified by associated changes in water vapour and clouds. We don’t know how important these feedback processes are. The recent fifth report from the IPCC presents a spread of projections. But some things are clear. In particular, if annual CO2 emissions continue to rise unchecked, we risk triggering drastic climate change – leading to the devastating later-century scenarios portrayed in the recent book by Naomi Oreskes and Erik Conway, and perhaps even the initiation of irreversible melting of the Greenland and Antarctic ice, which would eventually raise sea level by many metres.
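To see where the basic 1.2-degree figure comes from, and why the feedbacks dominate the uncertainty, here is a rough sketch using the standard logarithmic approximation for CO2 forcing and a typical value for the no-feedback (Planck) response; the feedback strengths plugged in are illustrative assumptions, not IPCC estimates.

```python
import math

# Rough sensitivity sketch; the feedback values are illustrative assumptions.
FORCING_COEFF = 5.35      # W/m^2, standard logarithmic approximation for CO2 forcing
PLANCK_RESPONSE = 3.2     # W/m^2 per K, approximate no-feedback restoring strength

def warming_for_co2_doubling(net_feedback_w_m2_k: float = 0.0) -> float:
    """Equilibrium warming (K) for doubled CO2, given a net feedback strength."""
    forcing = FORCING_COEFF * math.log(2)                    # ~3.7 W/m^2
    return forcing / (PLANCK_RESPONSE - net_feedback_w_m2_k)

print(f"No feedbacks:     {warming_for_co2_doubling(0.0):.1f} K")   # ~1.2 K
print(f"Modest feedbacks: {warming_for_co2_doubling(1.4):.1f} K")   # ~2.1 K (illustrative)
print(f"Strong feedbacks: {warming_for_co2_doubling(2.2):.1f} K")   # ~3.7 K (illustrative)
# The first number follows from basic physics; the uncertain water-vapour and
# cloud feedbacks are what stretch the projections across such a wide range.
```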

Many still hope that we can segue towards a low-carbon future without trauma and disaster. But politicians won’t gain much resonance by advocating a bare-bones approach that entails unwelcome lifestyle changes – especially if the benefits lie decades into the future. There are, however, three politically realistic measures that should be pushed. First, all countries could promote measures that actually save money – better energy efficiency, insulating buildings better and so forth. Second, efforts could focus on the reduction of pollutants such as methane and black carbon. These are minor contributors to global warming, but their reduction would (unlike that of CO2) have more manifest local side-benefits – especially in Asia. And third, there should be a step change in research into clean energy – why shouldn’t it be funded on a scale comparable to medical research?

The climate debate has been marred by too much blurring between the science, the politics and the commercial interests. Those who don’t like the implications of the IPCC projections have rubbished the science rather than calling for better science. But even if the science were clear-cut, there is wide scope for debate on the policy response. Those who apply a standard discount rate (as, for instance, Bjørn Lomborg’s Copenhagen Consensus recommendations do) are in effect writing off what happens beyond 2050. There is indeed little risk of catastrophe within that time horizon, so unsurprisingly they downplay the priority of addressing climate change. But if you apply a lower discount rate – and in effect don’t discriminate on grounds of date of birth, and care about those who’ll live into the 22nd century and beyond – then you may deem it worth making an investment now, to protect those future generations against the worst-case scenario and to prevent triggering really long-term changes such as the melting of Greenland’s ice.
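A minimal sketch of how much the discount rate matters: take the same hypothetical climate damage incurred in 2100 and ask what it is ‘worth’ today under a standard market-style rate and under a low rate of the kind favoured by those who weight future generations more equally. All the figures are illustrative assumptions.

```python
# Present-value sketch with illustrative figures.
damage_in_2100 = 1e12     # assume $1 trillion of climate damage incurred in 2100
years_ahead = 85          # roughly 2015 to 2100

def present_value(amount: float, annual_rate: float, years: int) -> float:
    """What a cost incurred `years` from now is worth today at a given discount rate."""
    return amount / (1 + annual_rate) ** years

for rate in (0.05, 0.014):
    pv = present_value(damage_in_2100, rate, years_ahead)
    print(f"Discount rate {rate:.1%}: worth about ${pv / 1e9:,.0f}bn today")
# At 5% the trillion-dollar damage shrinks to ~$16bn today; at 1.4% it is
# ~$300bn -- which is why the choice of discount rate largely settles how
# much action seems worth taking now.
```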

So what will actually happen on the climate front? My pessimistic guess is that political efforts to decarbonise energy production won’t gain traction and that the CO2 concentration in the atmosphere will rise at an accelerating rate throughout the next 20 years. But by then we’ll know with far more confidence – perhaps from advanced computer modelling, but also from how much global temperatures have actually risen by then – just how strongly the feedback from water vapour and clouds amplifies the effect of CO2 itself in creating a ‘greenhouse effect’. If the effect is strong, and the world’s climate consequently seems on a trajectory into dangerous territory, there may then be pressure for ‘panic measures’. These would have to involve a ‘plan B’ – being fatalistic about continuing dependence on fossil fuels but combating its effects by some form of geoengineering.

The ‘greenhouse warming’ could be counteracted by (for instance) putting reflecting aerosols in the upper atmosphere, or even vast sunshades in space. It seems feasible to throw enough material into the stratosphere to change the world’s climate – indeed, what is scary is that this might be within the resources of a single nation, or perhaps even a single corporation. The political problems of such geoengineering may be overwhelming. There could be unintended side-effects. Moreover, the warming would return with a vengeance if the countermeasures were ever discontinued; and other consequences of rising CO2 (especially the deleterious effects of ocean acidification) would be unchecked.

Geoengineering would be an utter political nightmare: not all nations would want to adjust the thermostat the same way. Very elaborate climatic modelling would be needed in order to calculate the regional impacts of any artificial intervention. (It would be a bonanza for lawyers if an individual or a nation could be blamed for bad weather!) Dan Schrag, who’ll be commenting later, is an expert on this topic. But as a non-expert I’d think it prudent to explore geoengineering techniques enough to clarify which options make sense, and perhaps to damp down undue optimism about a technical ‘quick fix’ for our climate.

So we’re deep into what Paul Crutzen dubbed the ‘anthropocene’. We’re under long-term threat from anthropogenic global changes to climate and biodiversity – driven by a rising population, ever more demanding of food, energy and other resources. All these issues are widely discussed. What’s depressing is the inaction – for politicians the immediate trumps the long-term; the parochial trumps the global. We need to ask whether nations must give up more sovereignty to new organisations along the lines of the IAEA, the WHO and so on.

 

Threats from novel technology

But for the rest of this talk I’ll address a different topic – our vulnerability to powerful technologies, both those we depend on today and those that still seem futuristic, even science fiction. Unlike climate and the environment, these are still under-discussed.

Those of us with cushioned lives in the developed world fret too much about minor hazards: improbable air crashes, carcinogens in food, low radiation doses, and so forth. But we are less secure than we think. We (and our political masters) don’t worry enough about scenarios that have thankfully not yet happened – events that could arise as unexpectedly as the 2008 financial crisis, but which could cause world-wide disruption and deal shattering blows to our society.

We live in an interconnected world increasingly dependent on elaborate networks: electric-power grids, air traffic control, international finance, just-in-time delivery, globally dispersed manufacturing and so forth. Unless these globalised networks are highly resilient, their manifest benefits could be outweighed by catastrophic (albeit rare) breakdowns – real-world analogues of what happened in 2008 to the financial system. Our cities would be paralysed without electricity. Supermarket shelves would be empty within days if supply chains were disrupted. Air travel can spread a pandemic worldwide within days. And social media can spread panic and rumour, and psychic and economic contagion, literally at the speed of light.

These issues impel us to plan internationally. For instance, whether or not a pandemic gets a global grip may hinge on how quickly a Vietnamese poultry farmer can report any strange sickness. And, by the way, the risk that pandemics could cause societal breakdown is far higher than in earlier centuries. English villages in the 14th century continued to function even when the Black Death halved their populations. In contrast, our societies would be vulnerable to breakdown as soon as hospitals overflowed and health services were overwhelmed – which would occur when the fatality rate was still a fraction of one per cent. But the human cost would be worst in the shambolic but burgeoning megacities of the developing world.

Advances in microbiology offer better prospects of containing such disasters. But the same research has downsides too. For instance, in 2012 researchers at Wisconsin and at Erasmus University in Holland showed that it was surprisingly easy to make an influenza virus both virulent and transmissible. When they published, they were pressured to redact some details. And the Wisconsin group has been experimenting on H1N1, the virus that led to the catastrophic 1918 epidemic. Last month the US government decided to cease funding and impose a moratorium on so-called ‘gain of function’ experiments. The concern was partly that such work could aid terrorists, but partly also that if such experiments weren’t conducted everywhere to the very highest safety and containment standards, there would be a risk of bioerror.

It is hard to make a clandestine H-bomb. In contrast, millions will one day have the capability to misuse biotech, just as they can misuse cybertech today. In the 1970s, in the early days of recombinant DNA research, a group of biologists led by Paul Berg formulated the ‘Asilomar Declaration’, advocating a moratorium on certain types of experiments and setting up guidelines. In retrospect, this move was perhaps over-cautious, but it seemed an encouraging precedent. Today, however, the research community is far larger, far more broadly international, and far more influenced by commercial pressures. Whatever regulations are imposed, on prudential or ethical grounds, they could never be enforced worldwide – any more than the drug laws can be. Whatever can be done will be done by someone, somewhere.

In consequence, maybe the most intractable challenges to all governments will stem from the rising empowerment of tech-savvy groups (or even individuals), by bio or cyber technology that becomes potentially ever more devastating – to the extent that even one episode could be too many. This will aggravate the tension between freedom, privacy and security.

The results of releasing dangerous pathogens are so incalculable that bioterror isn’t likely to be deployed by extremist groups with well-defined political aims. But such concerns would not give pause to an eco-fanatic, empowered by the bio-hacking expertise that may soon be routine, who believes that ‘Gaia’ is being threatened by the presence of a few billion too many humans. That’s my worst nightmare. (Most devastating would be a potentially fatal virus that was readily transmissible and had a long latency period).

The global village will have its village idiots and they’ll have global range.

 

Looking beyond 2050

These concerns are relatively near-term. Trends beyond 2050 should make us even more anxious. I’ll venture a word about these – but a tentative word, because scientists have a rotten record as forecasters. Ernest Rutherford, the greatest nuclear physicist of his time, said in the 1930s that nuclear energy was ‘moonshine’. One of my predecessors as Astronomer Royal said, as late as the 1950s, that space travel was ‘utter bilge’. My own crystal ball is very cloudy.

In the latter part of the 21st century the world will be warmer and more crowded – that’s one of the few confident predictions. But we can’t predict how our lives might by then have been changed by novel technologies. After all, the speedy societal transformation brought about by the smartphone, the internet and their ancillaries would have seemed magic even 20 years ago. So, looking several decades ahead, we must keep our minds open, or at least ajar, to prospects that may now seem science fiction.

The physicist Freeman Dyson foresees a time when children will be able to design and create new organisms just as routinely as his generation played with chemistry sets. I’d guess that this is comfortably beyond the ‘SF fringe’, but were even part of this scenario to come about, our ecology (and even our species) surely would not long survive unscathed.

But what about another fast-advancing technology: robotics and machine intelligence? Even back in the 1990s IBM’s ‘Deep Blue’ beat Kasparov, the world chess champion. More recently ‘Watson’ won a TV game show. Maybe a new-generation ‘hyper-computer’ could achieve oracular powers that offered its controller dominance of international finance and strategy.

Advances in software and sensors have been slower than in number-crunching capacity. Robots still can’t match the facility of a child in recognising and moving the pieces on a real chessboard. They can’t tie your shoelaces or cut your toenails. But machine learning and sensor technology are advancing apace. If robots could observe and interpret their environment as adeptly as we do, they would truly be perceived as intelligent beings, to which (or to whom) we could relate, at least in some respects, as we do to other people. And their greater processing speed may give them an advantage over us.

But will robots remain docile rather than ‘going rogue’? And what if a hyper-computer developed a mind of its own? If it could infiltrate the internet – and the internet of things – it could manipulate the rest of the world. It may have goals utterly orthogonal to human wishes – or even treat humans as an encumbrance.

Indeed, as early as the 1960s the British mathematician I J Good pointed out that a  super-intelligent robot (were it sufficiently versatile) could be the last invention that humans need ever make. Once machines have surpassed human capabilities, they could themselves design and assemble a new generation of even more powerful ones.

Ray Kurzweil, now working at Google, is the leading evangelist for this so-called ‘singularity’. He thinks that humans could transcend biology by merging with computers, maybe losing their individuality and evolving into a common consciousness. In old-style spiritualist parlance, they would ‘go over to the other side’. But he’s worried that it may not happen in his lifetime. So he wants his body frozen until this nirvana is reached. I was once interviewed by a group of ‘cryonic’ enthusiasts – in California (where else!) – called the ‘society for the abolition of involuntary death’. They will freeze your body, so that when immortality’s on offer you can be resurrected. I said I’d rather end my days in an English churchyard than a Californian refrigerator. They derided me as a ‘deathist’. (I was surprised to find that three Oxford professors were cryonic enthusiasts. Two had paid the full whack; a third had taken the cut-price option of just having his head frozen.)

In regard to all these speculations, we don’t know where the boundary lies between what may happen and what will remain science fiction – just as we don’t know whether to take seriously Freeman Dyson’s vision of bio-hacking by children. There are widely divergent views. Some experts – for instance Stuart Russell at Berkeley and Demis Hassabis of DeepMind – think that the AI field, like synthetic biotech, already needs guidelines for ‘responsible innovation’. But others, like Rodney Brooks, think these concerns are ‘misguided’, and too far from realisation to be worth worrying about. And the whole concept is philosophically contentious – John Searle has an article in a recent NYRB dismissing the very idea that a machine could have a mind of its own.

Be that as it may, it’s likely that before 2100, our society and its economy will be transformed by autonomous robots, even though these may be ‘idiot savants’ rather than displaying full human capabilities.

[Books like The Second Machine Age have addressed the economic and social disruption that will ensue when robots replace not just factory workers but white-collar workers as well (even lawyers are under threat!).]

A short digression:

One context where robots surely have a future is in space. In the second part of this century the whole solar system will be explored by flotillas of miniaturized robots. And, on a larger scale, robotic fabricators may build vast lightweight structures floating in space (solar energy collectors, for instance), perhaps mining raw materials from asteroids.

These robotic advances will erode the practical case for human spaceflight. Nonetheless, I hope people will follow the robots, though it will be as risk-seeking adventurers rather than for practical goals. The most promising developments are spearheaded by private companies. For instance SpaceX, led by Elon Musk, who also makes Tesla electric cars, has launched unmanned payloads and docked with the Space Station. He hopes soon to offer orbital flights to paying customers. Wealthy adventurers are already signing up for a week-long trip round the far side of the Moon – voyaging further from Earth than anyone has been before (but avoiding the greater challenge of a Moon landing and blast-off). I’m told they’ve sold a ticket for the second flight but not for the first flight. We should surely cheer on these private enterprise efforts in space – they can tolerate higher risks than a western government could impose on publicly-funded civilians, and thereby cut costs.

By 2100, groups of pioneers may have established ‘bases’ independent from the Earth – on Mars, or maybe on asteroids. Whatever ethical constraints we impose here on the ground, we should surely wish these adventurers good luck in using all the resources of genetic and cyborg technology to adapt themselves and their progeny to alien environments. This might be the first step towards divergence into a new species: the beginning of the post-human era. And it would also ensure that advanced life would survive, even if the worst conceivable catastrophe befell our planet.

But don’t ever expect mass emigration from Earth. Nowhere in our Solar system offers an environment even as clement as the Antarctic or the top of Everest. It’s a dangerous delusion to think that space offers an escape from Earth’s problems.

And here on Earth we may indeed have a bumpy ride through this century. The scenarios I’ve described – environmental degradation, extreme climate change, or unintended consequences of advanced technology –  could trigger serious, even catastrophic, setbacks to our civilization. But they wouldn’t wipe us all out. They’re extreme, but strictly speaking not ‘existential’.

 

Truly existential risks?

Are there conceivable events that could snuff out all life? Promethean concerns of this kind were raised by scientists working on the atomic bomb project during the Second World War. Could we be absolutely sure that a nuclear explosion wouldn’t ignite all the world’s atmosphere or oceans? Before the Trinity bomb test in New Mexico, Hans Bethe and two colleagues addressed this issue; they convinced themselves that there was a large safety factor. And luckily they were right. We now know for certain that a single nuclear weapon, devastating though it is, can’t trigger a nuclear chain reaction that would utterly destroy the Earth or its atmosphere.

But what about even more extreme experiments? Physicists were (in my view quite rightly) pressured to address the speculative ‘existential risks’ that could be triggered by powerful accelerators at Brookhaven and Geneva that generate unprecedented concentrations of energy. Could physicists unwittingly convert the entire Earth into particles called ‘strangelets’ – or, even worse, trigger a ‘phase transition’ that would shatter the fabric of space itself? Fortunately, reassurance could be offered: indeed I was one of those who pointed out that cosmic rays of much higher energies collide so frequently in the Galaxy, yet haven’t ripped space apart. And they have penetrated white dwarfs and neutron stars without triggering their conversion into ‘strangelets’.

But physicists should surely be circumspect and precautionary about carrying out experiments that generate conditions with no precedent even in the cosmos – just as biologists should avoid release of potentially-devastating genetically-modified pathogens.

So how risk-averse should we be? Some would argue that odds of 10 million to one against an existential disaster would be good enough, because that is below the chance that, within the next year, an asteroid large enough to cause global devastation will hit the Earth. (This is like arguing that the extra carcinogenic effect of artificial radiation is acceptable if it does no more than double the risk from natural radiation.) But to some, this limit may not seem stringent enough. If there were a threat to the entire Earth, the public might properly demand assurance that the probability is below one in a billion – even one in a trillion – before sanctioning such an experiment.

But can we meaningfully give such assurances? We may offer these odds against the Sun not rising tomorrow, or against a fair die giving 100 sixes in a row; that’s because we’re confident that we understand these things. But if our understanding is shaky – as it plainly is at the frontiers of physics –  we can’t really assign a probability, nor confidently assert that something is stupendously unlikely. It’s surely presumptuous to place extreme confidence in any theories about what happens when atoms are smashed together with unprecedented energy. If a congressional committee asked: ‘are you really claiming that there’s less than a one in a billion chance that you’re wrong?’ I’d feel uncomfortable saying yes.

But on the other hand, if a congressman went on to ask: “Could such an experiment disclose a transformative discovery that – for instance – provided a new source of energy for the world?”, I’d again offer high odds against it. The issue is then the relative likelihood of these two unlikely events – one hugely beneficial, the other catastrophic. Innovation is often hazardous, but if we don’t take risks we may forgo disproportionate benefits. Undiluted application of the ‘precautionary principle’ has a manifest downside. There is ‘the hidden cost of saying no’.

And, by the way, the priority that we should assign to avoiding truly existential disasters depends on an ethical question posed by (for instance) the philosopher Derek Parfit, which is this. Consider two scenarios: scenario A wipes out 90 per cent of humanity; scenario B wipes out 100 per cent. How much worse is B than A? Some would say 10 per cent worse: the body count is 10 per cent higher. But others would say B was incomparably worse, because human extinction forecloses the existence of billions, even trillions, of future people – and indeed an open-ended post-human future.

Especially if you accept the latter viewpoint, you’ll agree that existential catastrophes deserve more attention. That’s why some of us in (the other) Cambridge – both natural and social scientists – have inaugurated a research programme (the Centre for the Study of Existential Risk) to address these ‘existential’ risks, as well as the wider class of extreme risks I’ve discussed. We need to deploy the best scientific expertise to assess which alleged risks are pure science fiction, and which could conceivably become real; to consider how to enhance resilience against the more credible ones; and to warn against technological developments that could run out of control. And there are similar efforts elsewhere: at Oxford in the UK, here at MIT and in other places.

Moreover, we shouldn’t be complacent that all such probabilities are minuscule. We’ve no grounds for assuming that human-induced threats worse than those on our current risk register are improbable: they are newly emergent, so we have a limited time base for exposure to them and can’t be sanguine that we would survive them for long – nor about the ability of governments to cope if disaster strikes. Indeed we have zero grounds for confidence that we can survive the worst that future technologies could bring in their wake.

Technology brings with it great hopes, but also great fears. We mustn’t forget an important maxim: the unfamiliar is not the same as the improbable.

Another digression:

I’m often asked: is there a special perspective that astronomers can offer to science and philosophy? Having worked among them for many years, I have to tell you that contemplation of vast expanses of space and time doesn’t make astronomers serene and relaxed. They fret about everyday hassles as much as anyone. But they do have one special perspective –  an awareness of an immense future.

The stupendous timespans of the evolutionary past are now part of common culture (outside ‘fundamentalist’ circles, at any rate). But most people still somehow think we humans are the culmination of the evolutionary tree.  That hardly seems credible to an astronomer. Our Sun formed 4.5 billion years ago, but it’s got 6 billion more before the fuel runs out. And the expanding universe will continue –  perhaps forever –  destined to become ever colder, ever emptier. To quote Woody Allen, eternity is very long, especially towards the end.

Posthuman evolution –  here on Earth and far beyond – could  be as prolonged as the Darwinian evolution that’s led to us –  and even more wonderful. Any creatures witnessing the Sun’s demise 6 billion years hence won’t be human –  they’ll be as different from us as we are from a bug. Indeed evolution will be even faster than in the past – on a technological not a natural selection timescale.

Even in this ‘concertinaed’ timeline – extending billions of years into the future, as well as into the past – this century may be a defining moment when humans could jeopardise life’s immense potential. That’s why the avoidance of complete extinction has special resonance for an astronomer.

 

Obligations of scientists

Finally, a few thoughts of special relevance to my hosts in STS. Sheila Jasanoff and others have discussed the obligations of scientists when their investigations have potential social, economic and ethical impacts that concern all citizens. These issues are starkly relevant to the theme of this talk. So I’d like, before closing, to offer some thoughts – though with diffidence in front of this audience. It’s important to keep ‘clear water’ between science and policy. Risk assessment should be separate from risk management. Scientists should present policy options based on a consensus of expert opinion; but if they engage in advocacy they should recognise that on the economic, social and ethical aspects of any policy they speak as citizens and not as experts – and will have a variety of views.

I’d highlight some fine exemplars from the past: for instance, the atomic scientists who developed the first nuclear weapons during World War II. Fate had assigned them a pivotal role in history. Many of them – men such as Jo Rotblat, Hans Bethe, Rudolf Peierls and John Simpson (all of whom I was privileged to know in their later years) – returned with relief to peacetime academic pursuits. But the ivory tower wasn’t, for them, a sanctuary. They continued not just as academics but as engaged citizens – promoting efforts to control the power they had helped unleash, through national academies, the Pugwash movement and other bodies.

They were the alchemists of their time, possessors of secret specialised knowledge. The technologies I’ve discussed today have implications just as momentous as nuclear weapons. But in contrast to the ‘atomic scientists’, those engaged with the new challenges span almost all the sciences, are broadly international – and work in the commercial as well as the public sector.

But they all have a responsibility. You would be a poor parent if you didn’t care what happened to your children in adulthood, even though you may have little control over them. Likewise, scientists shouldn’t be indifferent to the fruits of their ideas – their creations.  They should try to foster benign spin-offs – commercial or otherwise. They should resist, so far as they can, dubious or threatening applications of their work, and alert politicians when appropriate. We need to foster a culture of ‘responsible innovation’, especially in fields like biotech, advanced AI and geoengineering.

But, more than that, choices on how technology is applied – what to prioritise and what to regulate – require wide public debate, and such debate must be informed and leveraged by ‘scientific citizens’, who will have a range of political perspectives. They can do this via campaigning groups, via blogging and journalism, or through political activity. There is a role for national academies too.

A special obligation lies on those in academia and on self-employed entrepreneurs – they have more freedom to engage in public debate than those employed in government service or in industry. (Academics have a special privilege to influence students. Polls show, unsurprisingly, that younger people, who expect to survive most of the century, are more engaged with and anxious about long-term and global issues – we should respond to their concerns.)

More should be done to assess, and then minimise, the extreme risks I’ve addressed. But though we live under their shadow, there seems no scientific impediment to achieving a sustainable and secure world, where all enjoy a lifestyle better than those in the ‘west’ do today. We can be technological optimists, even though the balance of effort in technology needs redirection – and needs to be guided by values that science itself can’t provide. But the intractable politics and sociology – the gap between potentialities and what actually happens – engender pessimism. Politicians look to their own voters – and the next election. Stockholders expect a pay-off in the short run. We downplay what’s happening even now in far-away countries. And we discount too heavily the problems we’ll leave for new generations. Without a broader perspective – without realising that we’re all on this crowded world together – governments won’t properly prioritise projects that are long-term in a political perspective, even if they occupy a mere instant in the history of our planet.

“Space-ship Earth” is hurtling through space. Its passengers are anxious and fractious. Their life-support system is vulnerable to disruption and breakdowns. But there is too little planning, too little horizon-scanning, too little awareness of long-term risks.

There needs to be a serious research programme, involving natural and social scientists, to compile a more complete register of these ‘extreme risks’, and to enhance resilience against the more credible ones. The stakes are so high that those involved in this effort will have earned their keep even if they reduce the probability of a catastrophe by one part in the sixth decimal place.

I’ll close with a reflection on something close to home, Ely Cathedral. This overwhelms us today. But think of its impact  900 years ago –  think of the vast enterprise its construction entailed. Most of its builders had never travelled more than 50 miles. The fens were their world. Even the most educated knew of essentially nothing beyond Europe. They thought the world was a few thousand years old –  and that it might not last another thousand.

But despite these constricted horizons, in both time and space –  despite the deprivation and harshness of their lives –  despite their primitive technology and meagre resources –  they built this huge and glorious building –  pushing the boundaries of what was possible. Those who conceived it knew they wouldn’t live to see it finished. Their legacy still elevates our spirits, nearly a millennium later.

What a contrast to so much of our discourse today! Unlike our forebears, we know a great deal about our world –  and indeed about what lies beyond. Technologies that our ancestors couldn’t have conceived enrich our lives and our understanding. Many phenomena still make us fearful, but the advance of science spares us from irrational dread.  We know that we are stewards of a precious ‘pale blue dot’ in a vast cosmos –  a planet with a future measured in billions of years –  whose fate depends on humanity’s collective actions this century.

But all too often the focus is short term and parochial. We downplay what’s happening even now in impoverished far-away countries. And we give too little thought to what kind of world we’ll leave for our grandchildren.

In today’s runaway world, we can’t aspire to leave a monument lasting a thousand years, but it would surely be shameful if we persisted in policies that denied future generations a fair inheritance and left them with a more depleted and more hazardous world. Wise choices will require the idealistic and effective efforts of natural scientists, environmentalists, social scientists and humanists – all guided by the knowledge that 21st century science can offer. And by values that science alone can’t provide.

But we mustn’t leap from denial to despair. So, having started with H G Wells, I give the final word to another secular sage, the great immunologist Peter Medawar.

“The bells that toll for mankind are like the bells of Alpine cattle. They are attached to our own necks, and it must be our fault if they do not make a tuneful and melodious sound.”

Martin Rees is a Fellow of Trinity College and Emeritus Professor of Cosmology and Astrophysics at the University of Cambridge. He is also chair of the committee for the Longitude Prize, a £10m reward for helping to combat antibiotic resistance, which is now open for submissions. Details here: longitudeprize.org. A version of this lecture was first delivered at the Harvard School of Government on 6 Nov 2014.


Goodbye to the Confederate flag

After the shootings in Charleston, the Republican right showed it was finally ready to reject the old symbols of the Confederacy.

On 27 June, an African-American activist named Bree Newsome woke up before dawn, put on her climbing equipment and scaled a 30-foot flagpole on the lawn of State House in Columbia, South Carolina. She then removed the Confederate battle flag that flew from it. “We can’t wait any longer,” she explained later in an online statement. “It’s time for a new chapter where we are sincere about dismantling white supremacy.”

After she was led away in handcuffs, the flag was raised again.

Newsome’s protest reflected a growing impatience within America’s black community and anger about liberal inaction. Political rallies by the Democratic presidential contenders Hillary Clinton and Bernie Sanders have been disrupted by the Black Lives Matter campaign against violence committed against young African Americans and the cultural and legal biases that justify it. While promoting his book on race in the US, the writer Ta-Nehisi Coates argued that, to African Americans, the battle flag represents a lingering attempt “to bury the fact that half this country thought it was a good idea to raise an empire rooted in slavery”.

Yet, on this matter, to everyone’s surprise, the black civil rights movement and many southern Republicans have proved to be of one mind. On 9 July the House of Representatives in South Carolina voted to lower the battle flag for good. It stood, representatives said, for racism. It had to go.

The context of this agreement was a painful one. Ten days before Newsome’s act, a 21-year-old white man named Dylann Roof shot and killed nine black worshippers at the Emanuel African Methodist Episcopal Church in Charleston, South Carolina. According to his room-mate, he wanted to start a race war. The TV screens showed a photo of him holding a gun in one hand and a Confederate battle flag in the other.

If the demands for redress made by civil rights groups didn’t come as a surprise, conservative acquiescence did. The Republican Party had built a solid base in the South by courting white voters who cherished the memory of the Confederacy. Yet the party’s presidential hopefuls from both the North and the South – including Jeb Bush, Lindsey Graham, Scott Walker and George Pataki – said that the battle flag ought to be lowered. The most striking intervention was made by the governor of South Carolina, Nikki Haley, who denounced the use of the Confederate flag and signed the bill removing it. Haley is now tipped to figure on the list of potential vice-presidential nominees.

The volte-face of the US right is in part a result of the horror of the Charleston shootings. Yet it also occurs in the context of major shifts within American society. There are still many conservatives who will defend Confederate heritage as a matter of southern pride but the culture wars are changing as the US becomes increasingly European in outlook. This is taking place across the country. It just happens to be more pronounced in the South because no other region has fought so violently and so long to resist the liberal tide.

The story of the battle flag is the story of the South. The first official Confederate flag used in the civil war of 1861-65 caused confusion during fighting – through the haze of gun smoke, its design of 13 stars and red and white bars was hard to distinguish from the Stars and Stripes. An alternative blue cross was rejected for being too sectarian; the racist Confederacy was anxious not to offend its Jewish citizens. So the cross became a diagonal X. This flag was never adopted as the official national flag of the Confederacy. In the years after the war its use was infrequent.

There was little need to visualise southern difference in a flag. It was self-evident in the physical signs of racial segregation: separate schools, pools and drinking fountains; black people confined to the back of the bus. Political displays of the battle flag of Dixie (the historical nickname for the states that seceded from the Union) only really resurfaced when that racial order was challenged by northern liberals. In 1948, the Democrats – then the party overwhelmingly in control of the South – split over modest calls for civil rights. The conservatives who refused to support that year’s presidential ticket, the “Dixiecrats”, triggered a revival of flag-waving across the region.

The old battle flag suddenly appeared on private lawns, on cars and at political rallies. Supposedly ancient cultural traditions were invented overnight. For instance, the 1948 student handbook of the University of Mississippi confessed: “Many Ole Miss customs are fairly new; they lack only the savouring which time brings . . . Ole Miss has adopted the Confederate flag as a symbol of the Mississippi spirit. Each football game finds the scarlet flag frantically waving to the rhythm of the Rebel band.”

I can confirm that this “tradition” was still going as recently as 2005. That year, I attended an American football game at Ole Miss and was surprised when the band played “Dixie” at the end. White boys and white girls stood up and belted out the folk song of the Confederacy, while black students filed out.

In 1958, South Carolina made it a crime to desecrate the battle flag. Three years later, on the 100th anniversary of the outbreak of the civil war, it was hoisted above its Capitol building in Columbia. That day, there was a struggle in the US Congress to keep federal funding going for segregated schools.

So clear is the link between the postwar white resistance to civil rights and the battle flag that many see it as the symbolic equivalent of the N-word. Jack Hunter, the editor of the conservative website Rare Politics, says: “Some people insist that it’s not about racism, not about slavery, not about segregation. But it’s about all those things.” Hunter grew up in Charleston and used to skateboard in the car park of the church that Dylann Roof attacked. When he was a young journalist, he appeared on local radio as a rabidly right-wing masked character called “the Southern Avenger”. His past was exposed in 2013 while he was working for Rand Paul, a Republican presidential candidate, and Hunter stepped down from his position. He publicly renounced his youthful association with racial conservatism. He now eschews any romanticism about the Confederate cause and its demand for states’ rights. “States’ rights to do what?” he asks: the right to discriminate against African Americans? He is glad that the State House flag is gone. He ascribes its longevity to ignorance, which was corrected by Roof’s rampage: “It was the first time that [southern Republicans] were able to see a different perspective on this symbol.”

Not everyone agrees. Richard Hines – a former South Carolina legislator, Reagan campaign state co-chair and senior activist with the Sons of Confederate Veterans – insists that the flag is “an enduring symbol of the southern fighting man”. Indeed, a poll in July found that 57 per cent of Americans think it stands for southern heritage, rather than racism. Yet that heritage has a political dimension. “Southern people are proud of who they are and there is a leftist assault to destroy the best part of America,” Hines says. “The Trotskyite elite in control of the establishment wants to root out the southern tradition” – a tradition of religious devotion, chivalry and military honour. It is possible to cast the battle flag as a pawn in a much larger cultural conflict.

In 2000, civil rights activists lobbied hard to get the battle flag removed from the top of the South Carolina Capitol and succeeded in having it shrunk in size and relocated to the grounds of State House. The issue came up in that year’s Republican presidential primaries – an unusually poisonous contest between George W Bush and John McCain. Supporters of Bush put out a false story that McCain had fathered an interracial child out of wedlock. McCain added to his woes by opining that the battle flag was “a symbol of racism and slavery”. An organisation called Keep It Flying flooded the state with 250,000 letters attacking him and he lost the crucial South Carolina primary to Bush.

The battle flag has retained a strong emotional power for a long time. This makes the Republican establishment’s abandonment of the flag all the more surprising. Then again, those who run the South are probably the people most likely to grasp how much the region has changed in just a decade.

***

In 2010 I took a trip through North Carolina. The landscape told a story. Dotted along the roadside were abandoned black buildings, the old tobacco sheds. The decline of the rural economy had rendered them obsolete. Over the fields that would once have been full of farmers were freshly tarmacked roads, stretching out to nowhere. My guide explained that these were supposed to be cul-de-sacs for new houses. North Carolina was going through a property boom. But who was going to buy all those homes, I asked? The answer: damn Yankees.

Demography is destiny. This once agricultural region developed fast from the 1960s onwards by keeping union membership, taxes and regulation as low as possible. Yet capitalism proved disastrous for southern conservatism. Northerners flooded in, seeking work or retirement and bringing their own values. The forecast is that North Carolina’s Research Triangle – the South’s Silicon Valley – will grow by 700,000 jobs and 1.2 million people in two decades.

White migration was accompanied by an influx of Spanish speakers as the service sector flourished. Between 2000 and 2010, the white share of the population of North Carolina fell from 70 to 65 per cent. The black proportion remained at roughly 21 per cent. The Latino proportion, however, jumped from 4.7 per cent to 8.4 per cent. Today, about a third of those over 60 are non-white; among those under 18, the figure is approaching half. As a result, politics in the South is no longer biracial: a contest between white and black. It is increasingly multiracial and uncoupled from the region’s complex past.

The impact of these changes is reflected in voting patterns. In 2000, the South was still overwhelmingly Republican in presidential contests. Even the Democratic nominee, Al Gore, a southerner, lost his home state of Tennessee. But in 2008 and 2012, Barack Obama took those states with the fastest-changing demographics: Florida and Virginia. He won North Carolina in 2008 and lost it in 2012 – but by less than 100,000 votes. It is true that the Republicans won back control in the 2014 midterm elections, with the result that the Deep South now sends few Democrats to Congress; but the region’s political masters are not quite as traditional-minded as they once were.

The Republican relationship with the Confederate past is complex. As the party of Abraham Lincoln and the Union, the GOP’s southern support was historically small. But in the 1960s the national Democratic Party embraced civil rights and alienated its once loyal southern following; the Republicans took the opportunity to steal some conservative white voters.

The growing southern Republican vote had a class component. Its success in local and congressional races was built more on winning over middle-class moderates than on appealing to the working-class racists who filled the ranks of the Ku Klux Klan. The southern Republican Party did enthusiastically embrace the Confederate battle flag in many quarters. But some office-holders did so only with ambiguity, while large sections of the party never identified with it at all. The period of Republican ascendancy in the South was, in reality, linked with a softening of the area’s racial politics.

Two of the Republicans’ current southern stars are Indian Americans: Bobby Jindal, the governor of Louisiana, and Nikki Haley, the anti-flag governor of South Carolina. There are just two black people in the US Senate and one of them is a Republican, the Tea Party-backed senator for South Carolina, Tim Scott. Marco Rubio, the Floridian senator and presidential candidate, is Cuban American, and the former Florida governor Jeb Bush is married to a Mexican-born woman and speaks fluent Spanish. Bush has tried to push a more moderate line on immigration, recognising that the GOP will struggle to win the White House if it appeals only to angry white voters. The Kentucky libertarian senator Rand Paul, Jack Hunter’s former boss, has called for legal reforms to correct the trend of keeping far more black than white people in prison. And he is not the only Republican to have been moved by recent race riots sparked by police violence.

***

Violence on the streets of Ferguson, Missouri, and Baltimore, Maryland, confirmed that there still is a culture war in the US. Yet its character has changed. In the past, civil disturbances were typically leapt upon by conservative politicians as evidence of social decline. The 1992 LA riots were blamed on single parenthood and rap lyrics. In contrast, conservative leaders today are far more likely to acknowledge the problems of white racism. There is no place in their ranks for the likes of Dylann Roof. White supremacists are tiny in number.

Jack Hunter claims: “The KKK is like 12 guys in a telephone booth. Liberal groups will use their threat for fundraising but it doesn’t exist. It hasn’t properly since the 1960s.” Roof’s actions say more about gun control, mental illness and the angst of the young than they do about popular, largely liberal views on race, as polling shows.

We can see a similar liberal shift in other areas of the historic culture war. In May 2015 Gallup released the results of a “moral acceptability” survey charting changes in national attitude across all age groups, from 2001 to 2015. Approval of gay relationships jumped from 40 to 63 per cent; having a baby out of wedlock from 45 to 61 per cent; sex between unmarried men and women from 53 to 68 per cent; doctor-assisted suicide from 49 to 56 per cent; even polygamy went from 7 to 16 per cent. Abortion remained narrowly disapproved of: support for access has only crept up from 42 to 45 per cent. This is probably a result of an unusual concentration of political and religious opposition and because it involves a potential life-or-death decision. But the general trend is that young people just don’t care as much about what consenting adults get up to.

Why? It might be because old forms of identity are dying. One way of measuring that is religious affiliation. From 2007 to 2014, according to Pew Research, the proportion of Americans describing themselves as Christian fell from 78 to 71 per cent. Today, only a quarter of the population is evangelical and 21 per cent is Catholic, the latter down despite high immigration. Then there is the decline in civic or communal activity. Since 2012, the organisers of Nascar, the stock-car races, have not published attendance figures at their tracks, probably because they have fallen so sharply. The decline of this most macho and working-class of sports parallels the fall in conservative forms of collective identity such as southern traditionalism.

The old culture war was, like the racial politics of the old South, binary. In the 1950s, around the same time as the South invented its tradition of flying the battle flag in colleges, the US constructed an ideal of the “normal” nuclear family unit: straight, white, patriarchal, religious. On the other side was the “abnormal”: gay, black, feminist, atheist, and the rest. The surest way to get elected in the US between 1952 and 2004 was to associate yourself with the economic needs and cultural prejudices of the majority. The approach was once summed up by a Richard Nixon strategist thus: split the country in two and the Republicans will take the larger half. But that is changing. The old normal is no longer the cultural standard but just one of many identities to choose from. The races are mixing. Women want to work more and have children later in life, possibly without marriage. Many religious people are having to rethink their theology when a child comes out as gay. And the enforcers of the old ways – the unions, churches or political parties – are far less attractive than the atomising internet.

***

Politicians are scrabbling to keep up with the diffusion of American identity. Democrats got lucky when they nominated Barack Obama and chose a presidential candidate who reflected the fractured era well: interracial, non-denominational Christian, and so on. In the 2012 presidential race the Republicans got burned when they tried to play the old culture war card on abortion. They won’t repeat that mistake. After the Supreme Court legalised gay marriage across the country in June, the right’s response was not as uniformly loud and outraged as it would have been in the past. Some protested, but serious presidential contenders such as Jeb Bush grasped the implications of the defeat. There is a cultural and political realignment going on and no one is sure where it will lead. It’s encouraging caution among the Republican top brass. It is time, they think, to abandon lost causes.

The death of southern traditionalism is part of the ebb and flow of cultural history. Identities flourish and die. As political fashions change, you find the typically American mix of triumph on one side and jeremiad on the other. Richard Hines stood vigil as the battle flag was lowered in Columbia and noted with disgust the presence of what he described as “bussed-in” activists. “They pulled out all these gay pride flags and started shouting, ‘USA, USA, USA!’ It reminded me of the Bolshevik Revolution.”

Hines reckons that more southerners will now fly the flag than ever before and says he has attended overflow rallies of ordinary folks who love their region. He may well be correct. The faithful will keep the old Confederate standard fluttering on their lawns – an act of secession from the 21st century. But in the public domain, the battle flag is on its way down and in its place will be raised the standard of the new America. The rainbow flag flutters high. For now.

Tim Stanley is a historian and a columnist for the Telegraph

This article first appeared in the 20 August 2015 issue of the New Statesman, Corbyn wars