French Guiana's Amazonia region. What happens here affects the climate of the entire world. Photo: Jody Amiet/AFP/Getty

Martin Rees: The world in 2050 and beyond

In today’s runaway world, we can’t aspire to leave a monument lasting 1,000 years, but it would surely be shameful if we persisted in policies that denied future generations a fair inheritance and left them with a more depleted and more hazardous world.

I’ll start with a flashback to 1902. In that year the young H G Wells gave a celebrated lecture at the Royal Institution in London. He spoke mainly in visionary mode. “Humanity,” he proclaimed, “has come some way, and the distance we have travelled gives us some earnest of the way we have to go. All the past is but the beginning of a beginning; all that the human mind has accomplished is but the dream before the awakening.” His rather purple prose still resonates more than 100 years later – he realised that we humans aren’t the culmination of emergent life.

But Wells wasn’t an optimist. He also highlighted the risk of global disaster: “It is impossible to show why certain things should not utterly destroy and end the human story... and make all our efforts vain... something from space, or pestilence, or some great disease of the atmosphere, some trailing cometary poison, some great emanation of vapour from the interior of the earth, or new animals to prey on us, or some drug or wrecking madness in the mind of man”.

I quote Wells because he reflects the mix of optimism and anxiety – and of speculation and science – which I’ll try to offer in this lecture. Were he writing today, he would have been elated by our expanded vision of life and the cosmos –  but he’d have been even more anxious about the perils we might face. The stakes are indeed getting higher: new science offers huge opportunities but its consequences could jeopardise our survival. Many are concerned that it is ‘running away’ so fast that neither politicians nor the lay public can assimilate or cope with it.

My own expertise is in astronomy and space technology, so you may imagine that I’m kept awake at night by worry about asteroid impacts. Not so. Indeed this is one of the few threats that we can quantify. Every ten million years or so, a body a few kilometres across will hit the earth, causing global catastrophe – there are a few chances in a million that this is how we’ll die. But there are larger numbers of smaller asteroids that could cause regional or local devastation. A body of, say, 300m across, if it fell into the Atlantic, would produce huge tsunamis that would devastate the east coast of the US, as well as much of Europe. And still smaller impacts are more frequent. One in Siberia in 1908 released energy equivalent to 5 megatons.
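Here, for the curious, is the back-of-envelope arithmetic behind odds of that order. The impact rate comes from the paragraph above; the assumed lifetime and the fraction of humanity killed are illustrative round numbers of my own, not precise estimates.

```python
# Rough back-of-envelope: personal odds of dying in a giant asteroid impact.
# The lifetime and the fraction of humanity killed are illustrative assumptions.

impacts_per_year = 1 / 10_000_000   # one multi-kilometre impact every ~10 million years
human_lifetime_years = 80           # assumed average human lifetime
fraction_killed = 0.25              # assumed fraction of humanity killed by such an impact

p_impact_in_lifetime = impacts_per_year * human_lifetime_years
p_die_this_way = p_impact_in_lifetime * fraction_killed

print(f"Chance of a giant impact within one lifetime: {p_impact_in_lifetime:.0e}")  # ~8e-06
print(f"Personal odds of dying that way:              {p_die_this_way:.0e}")        # a few per million
```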

Can we be forewarned of these impacts? The answer is yes. There are plans to survey the million potential earth-crossing asteroids bigger than 50m and track their orbits precisely enough to predict possible impacts. With forewarning of an impact, action could be taken to evacuate the most vulnerable areas. Even better news is that during this century we could develop the technology to protect us. A ‘nudge’, imparted a few years before the threatened impact, would only need to change an asteroid’s velocity by a millimetre per second in order to deflect its path away from the earth.

If you calculate an insurance premium in the usual way, by multiplying probability by consequences, it turns out that it is worth spending a billion dollars a year to reduce asteroid risk.
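The same multiplication can be sketched across the whole range of impact sizes. Every frequency and damage figure below is an illustrative assumption, chosen only to show how such a premium is estimated, not to reproduce any published calculation.

```python
# Expected annual loss from asteroid impacts: the sum of (annual probability x damage)
# over impact classes. All frequencies and damage figures are illustrative assumptions.

impact_classes = [
    # (description,              annual probability, assumed damage in US dollars)
    ("multi-km, global",         1e-7,               4e14),  # lives plus the global economy
    ("~300 m, ocean/regional",   1e-5,               1e13),  # coastal devastation
    ("~50 m, local (Tunguska)",  3e-3,               5e10),  # if it struck a populated area
]

expected_annual_loss = sum(p * damage for _, p, damage in impact_classes)
print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
# With these assumed numbers the expected loss comes to a few hundred million dollars
# a year -- the order of magnitude that justifies a 'premium' of roughly a billion.
```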

Other natural threats – earthquakes and volcanoes – are less predictable. But there’s one reassuring thing about them, as there is about asteroids: the annual risk they pose isn’t getting bigger. It’s the same for us as it was for the Neanderthals – or indeed for the dinosaurs.


Human-induced threats

In contrast, the hazards that are the focus of this talk are those that humans themselves engender – and they now loom far larger. And in discussing them I’m straying far from my ‘comfort zone’ of expertise. So I comment as a ‘citizen scientist’, and as a worried member of the human race. I’ll skate over a range of topics, in the hope of being controversial enough to provoke discussion.

Ten years ago I wrote a book that I entitled Our Final Century? My publisher deleted the question-mark. The American publishers changed the title to Our Final Hour (Americans seek instant gratification).

My theme was this. Earth is 45 million centuries old. But this century is the first when one species – ours – can determine the biosphere’s fate. I didn’t think we’d wipe ourselves out. But I did think we’d be lucky to avoid devastating setbacks. That’s because of unsustainable anthropogenic stresses on ecosystems; because there are more of us, and we’re all more demanding of resources; and – most important of all – because we’re empowered by new technology, which exposes us to novel vulnerabilities.

And we’ve had one lucky escape already.

At any time in the Cold War era – when armament levels escalated beyond all reason – the superpowers could have stumbled towards Armageddon through muddle and miscalculation. During the Cuban missile crisis my fellow students and I participated anxiously in vigils and demonstrations. But we would have been even more scared had we then realised just how close we were to catastrophe. Kennedy was later quoted as having said at one stage that the odds were ‘between one in three and evens’. And only when he was long retired did Robert McNamara state frankly that “[w]e came within a hairbreadth of nuclear war without realizing it. It’s no credit to us that we escaped – Khrushchev and Kennedy were lucky as well as wise.” Be that as it may, we were surely at far greater hazard from nuclear catastrophe than from anything nature could do. Indeed the annual risk of thermonuclear destruction during the Cold War was about 10,000 times higher than that from asteroid impact.

It is now conventionally asserted that nuclear deterrence worked. In a sense, it did. But that doesn’t mean it was a wise policy. If you play Russian roulette with one or two bullets in the barrel, you are more likely to survive than not, but the stakes would need to be astonishingly high – or the value you place on your life inordinately low – for this to seem a wise gamble. But we were dragooned into just such a gamble throughout the Cold War era. It would be interesting to know what level of risk other leaders thought they were exposing us to, and what odds most European citizens would have accepted, if they’d been asked to give informed consent. For my part, I would not have chosen to risk a one in three – or even one in six – chance of a disaster that would have killed hundreds of millions and shattered the historic fabric of all our cities, even if the alternative were certain Soviet dominance of Western Europe. And of course the devastating consequences of thermonuclear war would have spread far beyond the countries that faced a direct threat, especially if a nuclear winter were triggered.

The threat of global annihilation involving tens of thousands of H-bombs is thankfully in abeyance; there is, though, currently more risk that smaller nuclear arsenals might be used in a regional context, or even by terrorists. But we can’t rule out, later in the century, a geopolitical realignment leading to a standoff between new superpowers. So a new generation may face its own “Cuba” – and one that could be handled less well or less luckily than the 1962 crisis was.

Nuclear weapons are based on 20th-century science. I’ll return later in my talk to the 21st-century sciences – bio, cyber and AI – and what they might portend.

But before that let’s focus on the potential devastation that could be wrought by human-induced environmental degradation and climate change. These threats are long-term and insidious. They stem from humanity’s ever-heavier collective ‘footprint’, which threatens to  stress our finite planet’s ecology beyond sustainable limits…

There’s nothing new about these concerns. Doom-laden predictions of environmental catastrophe famously came in the 1970s from the Club of Rome, Paul Ehrlich and other groups. These proved wide of the mark. Unsurprisingly, such memories engender scepticism about the worst-case environmental and climatic projections. But the hazards may merely have been postponed – the pressures are now far higher.

For one thing, the world is more crowded. Fifty years ago, world population was below 3 billion. It now exceeds 7 billion. And by 2050 it’s projected to be between 8.5 and 10 billion, the growth being mainly in Africa and India. We must hope for a demographic transition in those countries whose populations are still rising fast, because the higher the post-2050 population becomes, the greater will be all pressures on resources (especially if the developing world narrows its gap with the developed world in its per capita consumption).

Humans already appropriate around 40 per cent of the biosphere’s primary productivity, and that fraction is growing. The resultant ecological shock could irreversibly impoverish our biosphere. Extinction rates are rising: we’re destroying the book of life before we’ve read it. Biodiversity is a crucial component of human wellbeing. We’re clearly harmed if fish stocks dwindle to extinction; there are plants in the rainforest whose gene pool might be useful to us. But for many environmentalists these ‘instrumental’ – and anthropocentric – arguments aren’t the only compelling ones. For them there are further ethical issues: preserving the richness of our biosphere has value in its own right, over and above what it means to us humans.

Pressures on food supplies and on the entire biosphere will be aggravated by climate change. And climate change exemplifies the tension between the science, the public and the politicians. One thing isn’t controversial. The atmospheric CO2 concentration is rising – and this is mainly due to the burning of fossil fuels. Straightforward physics tells us that this build-up will induce a long-term warming trend, superimposed on all the other complicated effects that make climate fluctuate. So far, so good.

But what’s less well understood is how big the effect is. Doubling of CO2 in itself causes just 1.2 degrees warming. But the effect can be amplified by associated changes in water vapour and clouds. We don’t know how important these feedback processes are. The recent fifth report from the IPCC presents a spread of projections. But some things are clear. In particular, if annual CO2 emissions continue to rise unchecked, we risk triggering drastic climate change—leading to the devastating scenarios later in this century portrayed in the recent book by Naomi Oreskes and Erik Conway, and even perhaps the initiation of irreversible melting of the Greenland and Antarctic ice, which would eventually raise sea level by many metres.
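The 1.2-degree figure itself comes from straightforward radiative physics; the uncertainty lies almost entirely in the feedback factor. A minimal sketch of that arithmetic, using the standard logarithmic forcing formula and an illustrative range of net feedback strengths (the feedback values are assumptions for illustration, not IPCC figures):

```python
import math

# No-feedback warming from doubled CO2, then amplification by feedbacks.
# The forcing formula is the standard logarithmic fit; the feedback factors
# below are illustrative assumptions spanning roughly the range under debate.

def forcing(concentration_ratio):
    """Radiative forcing (W/m^2) for a given CO2 concentration ratio."""
    return 5.35 * math.log(concentration_ratio)

planck_response = 1 / 3.2            # K per (W/m^2): the no-feedback ('Planck') response
delta_f = forcing(2.0)               # doubling of CO2 gives about 3.7 W/m^2

dt_no_feedback = planck_response * delta_f
print(f"No-feedback warming: {dt_no_feedback:.1f} C")      # ~1.2 C

for f in (0.3, 0.5, 0.65):           # assumed net feedback factors (water vapour, clouds, ice)
    print(f"feedback factor {f}: {dt_no_feedback / (1 - f):.1f} C warming")
```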

Many still hope that we can segue towards a low-carbon future without trauma and disaster. But politicians won’t gain much traction by advocating a bare-bones approach that entails unwelcome lifestyle changes – especially if the benefits are far away and decades into the future. There are, however, three politically realistic measures that should be pushed. First, all countries could promote measures that actually save money – better energy efficiency, insulating buildings better and so forth. Second, efforts could focus on the reduction of pollutants such as methane and black carbon. These are minor contributors to global warming, but their reduction would (unlike that of CO2) have more manifest local side-benefits – especially in Asia. And third, there should be a step change in research into clean energy – why shouldn’t it be on a scale comparable to medical research?

The climate debate has been marred by too much blurring between the science, the politics and the commercial interests. Those who don’t like the implications of the IPCC projections have rubbished the science rather than calling for better science. But even if the science were clear-cut, there is wide scope for debate on the policy response. Those who apply a standard discount rate (as, for instance, Bjorn Lomborg’s Copenhagen Consensus recommendations do) are in effect writing off what happens beyond 2050. There is indeed little risk of catastrophe within that time horizon, so unsurprisingly they downplay the priority of addressing climate change. But if you apply a lower discount rate – and in effect don’t discriminate on grounds of date of birth, and care about those who’ll live into the 22nd century and beyond – then you may deem it worth making an investment now, to protect those future generations against the worst-case scenario and to prevent triggering really long-term changes like the melting of Greenland’s ice.
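To see how much the choice of discount rate matters, consider a minimal sketch. The trillion-dollar damage figure and the three rates are illustrative choices of mine, not anyone’s published estimates.

```python
# Present value today of $1 trillion of climate damage incurred 100 years from now,
# under different annual discount rates. The damage figure is an illustrative assumption.

damage = 1e12        # assumed future damage, in dollars
years = 100

for rate in (0.05, 0.014, 0.001):   # a 'standard' commercial rate, a Stern-style rate, a near-zero rate
    present_value = damage / (1 + rate) ** years
    print(f"discount rate {rate * 100:4.1f}%  ->  present value ${present_value / 1e9:6.1f} billion")

# At 5 per cent the trillion-dollar loss is 'worth' under $10bn today -- effectively written off.
# At near-zero rates it retains almost its full weight, and acting now looks like a bargain.
```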

So what will actually happen on the climate front? My pessimistic guess is that political efforts to decarbonise energy production won’t gain traction and that the CO2 concentration in the atmosphere will rise at an accelerating rate throughout the next 20 years. But by then we’ll know with far more confidence – perhaps from advanced computer modelling, but also from how much global temperatures have actually risen by then – just how strongly the feedback from water vapour and clouds amplifies the effect of CO2 itself in creating a ‘greenhouse effect’. If the effect is strong, and the world’s climate consequently seems on a trajectory into dangerous territory, there may then be pressure for ‘panic measures’. These would have to involve a ‘plan B’ – being fatalistic about continuing dependence on fossil fuels, but combatting its effects by some form of geoengineering.

The ‘greenhouse warming’ could be counteracted by (for instance) putting reflecting aerosols in the upper atmosphere, or even vast sunshades in space. It seems feasible to throw enough material into the stratosphere to change the world’s climate – indeed what is scary is that this might be within the resources of a single nation, or perhaps even a single corporation. The political problems of such geoengineering may be overwhelming. There could be unintended side effects. Moreover, the warming would return with a vengeance if the countermeasures were ever discontinued; and other consequences of rising CO2 (especially the deleterious effects of ocean acidification) would be unchecked.

Geoengineering would be an utter political nightmare: not all nations would want to adjust the thermostat the same way. Very elaborate climatic modelling would be needed in order to calculate the regional impacts of any artificial intervention. (It would be a bonanza for lawyers if an individual or a nation could be blamed for bad weather!). Dan Schrag, who’ll be commenting later, is an expert on this topic. But as a non-expert I’d think it prudent to explore geoengineering techniques enough to clarify which options make sense, and perhaps damp down undue optimism about a technical ‘quick fix’ of our climate.

So we’re deep into what Paul Crutzen dubbed the ‘anthropocene’. We’re under long-term threat from anthropogenic global changes to climate and biodiversity – driven by a rising population, ever more demanding of food, energy and other resources. All these issues are widely discussed. What’s depressing is the inaction – for politicians the immediate trumps the long-term; the parochial trumps the global. We need to ask whether nations should give up more sovereignty to new organisations along the lines of the IAEA and the WHO.


Threats from novel technology

But for the rest of this talk I’ll address a different topic – our vulnerability to powerful technologies: those we depend on today, and those that still seem futuristic, even science fiction. Unlike climate and the environment, these are still under-discussed.

Those of us with cushioned lives in the developed world fret too much about minor hazards: improbable air crashes, carcinogens in food, low radiation doses, and so forth. But we are less secure than we think. We (and our political masters) don’t worry enough about scenarios that have thankfully not yet happened – events that could arise as unexpectedly as the 2008 financial crisis, but which could cause world-wide disruption and deal shattering blows to our society.

We live in an interconnected world increasingly dependent on elaborate networks: electric-power grids, air traffic control, international finance, just-in-time delivery, globally-dispersed manufacturing and so forth. Unless these globalised networks are highly resilient, their manifest benefits could be outweighed by catastrophic (albeit rare) breakdowns – real-world analogues of what happened in 2008 to the financial system.  Our cities would be paralysed without electricity. Supermarket shelves would be empty within days if supply chains were disrupted. Air travel can spread a pandemic worldwide within days. And social media can spread panic and rumor, and psychic and economic contagion, literally at the speed of light.

The issues impel us to plan internationally. For instance, whether or not a pandemic gets a global grip may hinge on how quickly a Vietnamese poultry farmer can report any strange sickness. And, by the way, the risk that pandemics could cause societal breakdown is far higher than in earlier centuries. English villages in the 14th century continued to function even when the Black Death halved their populations. In contrast, our societies would be vulnerable to breakdown as soon as hospitals overflowed and health services were overwhelmed – which would occur when the fatality rate was still a fraction of one per cent. But the human cost would be worst in the shambolic but burgeoning megacities of the developing world.

Advances in microbiology offer better prospects of containing such disasters. But the same research has downsides too. For instance, in 2012 researchers in Wisconsin and at Erasmus University in the Netherlands showed that it was surprisingly easy to make an influenza virus both virulent and transmissible. When they published, they were pressured to redact some details. And the Wisconsin group has been experimenting on H1N1, the virus that led to the catastrophic 1918 epidemic. Last month the US government decided to cease funding and impose a moratorium on such so-called ‘gain of function’ experiments. The concern was partly that publication would aid terrorists, but partly also that if such experiments weren’t conducted everywhere to the very highest safety and containment standards, there would be a risk of bioerror.

It is hard to make a clandestine H-bomb. In contrast, millions will one day have the capability to misuse biotech, just as they can misuse cybertech today. In the 1970s, in the early days of recombinant DNA research, a group of biologists led by Paul Berg formulated the ‘Asilomar Declaration’, advocating a moratorium on certain types of experiments and setting up guidelines. In retrospect, this move was perhaps over-cautious, but it seemed an encouraging precedent. Today, however, the research community is far larger, far more broadly international, and far more influenced by commercial pressures. Whatever regulations are imposed, on prudential or ethical grounds, they could never be enforced worldwide – any more than the drug laws can. Whatever can be done will be done by someone, somewhere.

In consequence, maybe the most intractable challenges to all governments will stem from the rising empowerment of tech-savvy groups (or even individuals), by bio or cyber technology that becomes potentially ever more devastating – to the extent that even one episode could be too many. This will aggravate the tension between freedom, privacy and security.

The results of releasing dangerous pathogens are so incalculable that bioterror isn’t likely to be deployed by extremist groups with well-defined political aims. But such concerns would not give pause to an eco-fanatic, empowered by the bio-hacking expertise that may soon be routine, who believes that ‘Gaia’ is being threatened by the presence of a few billion too many humans. That’s my worst nightmare. (Most devastating would be a potentially fatal virus that was readily transmissible and had a long latency period).

The global village will have its village idiots and they’ll have global range.


Looking beyond 2050

These concerns are relatively near-term. Trends beyond 2050 should make us even more anxious. I’ll venture a word about these – but a tentative word, because scientists have a rotten record as forecasters. Ernest Rutherford, the greatest nuclear physicist of his time, said in the 1930s that nuclear energy was ‘moonshine’. One of my predecessors as Astronomer Royal said, as late as the 1950s, that space travel was ‘utter bilge’. My own crystal ball is very cloudy.

In the latter part of the 21st century the world will be warmer and more crowded – that’s one of the few confident predictions. But we can’t predict how our lives might by then have been changed by novel technologies. After all, the speedy societal transformation brought about by the smartphone, the internet and their ancillaries would have seemed like magic even 20 years ago. So, looking several decades ahead, we must keep our minds open, or at least ajar, to prospects that may now seem science fiction.

The physicist Freeman Dyson foresees a time when children will be able to design and create new organisms just as routinely as his generation played with chemistry sets. I’d guess that this is comfortably beyond the ‘SF fringe’, but were even part of this scenario to come about, our ecology (and even our species) surely would not long survive unscathed.

But what about another fast-advancing technology: robotics and machine intelligence? Even back in the 1990s IBM’s ‘Deep Blue’ beat Kasparov, the world chess champion. More recently ‘Watson’ won a TV game show. Maybe a new-generation ‘hyper-computer’ could achieve oracular powers that offered its controller dominance of international finance and strategy.

Advances in software and sensors have been slower than in number-crunching capacity. Robots still can’t match the facility of a child in recognising and moving the pieces on a real chessboard. They can’t tie your shoelaces or cut your toenails. But machine learning and sensor technology are advancing apace. If robots could observe and interpret their environment as adeptly as we do, they would truly be perceived as intelligent beings, to which (or to whom) we could relate, at least in some respects, as we do to other people. And their greater processing speed may give them an advantage over us.

But will robots remain docile rather than ‘going rogue’? And what if a hyper-computer developed a mind of its own? If it could infiltrate the internet – and the internet of things – it could manipulate the rest of the world. It may have goals utterly orthogonal to human wishes – or even treat humans as an encumbrance.

Indeed, as early as the 1960s the British mathematician I J Good pointed out that a  super-intelligent robot (were it sufficiently versatile) could be the last invention that humans need ever make. Once machines have surpassed human capabilities, they could themselves design and assemble a new generation of even more powerful ones.

Ray Kurzweil, now working at Google, is the leading evangelist for this so-called ‘singularity’. He thinks that humans could transcend biology by merging with computers, maybe losing their individuality and evolving into a common consciousness. In old-style spiritualist parlance, they would ‘go over to the other side’. But he’s worried that it may not happen in his lifetime. So he wants his body frozen until this nirvana is reached. I was once interviewed by a group of cryonics enthusiasts – in California (where else!) – called the ‘society for the abolition of involuntary death’. They will freeze your body, so that when immortality’s on offer you can be resurrected. I said I’d rather end my days in an English churchyard than a Californian refrigerator. They derided me as a ‘deathist’. (I was surprised to find that three Oxford professors were cryonics enthusiasts. Two had paid the full whack; a third had taken the cut-price option of just having his head frozen.)

In regard to all these speculations, we don’t know where the boundary lies between what may happen and what will remain science fiction – just as we don’t know whether to take seriously Freeman Dyson’s vision of bio-hacking by children. There are widely divergent views. Some experts, for instance Stuart Russell at Berkeley and Demis Hassabis of DeepMind, think that the AI field, like synthetic biotech, already needs guidelines for ‘responsible innovation’. But others, like Rodney Brooks, think these concerns are ‘misguided’, and too far from realisation to be worth worrying about. And the whole concept is philosophically contentious: John Searle, in a recent New York Review of Books article, dismisses the entire notion that a machine could have a mind of its own.

Be that as it may, it’s likely that before 2100, our society and its economy will be transformed by autonomous robots, even though these may be ‘idiot savants’ rather than displaying full human capabilities.

[Books like The Second Machine Age have addressed the economic and social disruption that will ensue when robots replace not just factory workers but white-collar workers as well (even lawyers are under threat!).]

A short digression:

One context where robots surely have a future is in space. In the second part of this century the whole solar system will be explored by flotillas of miniaturized robots. And, on a larger scale, robotic fabricators may build vast lightweight structures floating in space (solar energy collectors, for instance), perhaps mining raw materials from asteroids.

These robotic advances will erode the practical case for human spaceflight. Nonetheless, I hope people will follow the robots, though they will go as risk-seeking adventurers rather than for practical goals. The most promising developments are spearheaded by private companies. For instance SpaceX, led by Elon Musk, who also makes Tesla electric cars, has launched unmanned payloads and docked with the Space Station. He hopes soon to offer orbital flights to paying customers. Wealthy adventurers are already signing up for a week-long trip round the far side of the Moon – voyaging further from Earth than anyone has been before (but avoiding the greater challenge of a Moon landing and blast-off). I’m told they’ve sold a ticket for the second flight but not for the first. We should surely cheer on these private-enterprise efforts in space – they can tolerate higher risks than a western government could impose on publicly funded civilians, and thereby cut costs.

By 2100, groups of pioneers may have established ‘bases’ independent from the Earth – on Mars, or maybe on asteroids. Whatever ethical constraints we impose here on the ground, we should surely wish these adventurers good luck in using all the resources of genetic and cyborg technology to adapt themselves and their progeny to alien environments. This might be the first step towards divergence into a new species: the beginning of the post-human era. And it would also ensure that advanced life would survive, even if the worst conceivable catastrophe befell our planet.

But don’t ever expect mass emigration from Earth. Nowhere in our Solar system offers an environment even as clement as the Antarctic or the top of Everest. It’s a dangerous delusion to think that space offers an escape from Earth’s problems.

And here on Earth we may indeed have a bumpy ride through this century. The scenarios I’ve described – environmental degradation, extreme climate change, or unintended consequences of advanced technology –  could trigger serious, even catastrophic, setbacks to our civilization. But they wouldn’t wipe us all out. They’re extreme, but strictly speaking not ‘existential’.


Truly existential risks?

Are there conceivable events that could snuff out all life? Promethean concerns of this kind were raised by scientists working on the atomic bomb project during the Second World War. Could we be absolutely sure that a nuclear explosion wouldn’t ignite the world’s entire atmosphere or oceans? Before the Trinity bomb test in New Mexico, Hans Bethe and two colleagues addressed this issue; they convinced themselves that there was a large safety factor. And luckily they were right. We now know for certain that a single nuclear weapon, devastating though it is, can’t trigger a nuclear chain reaction that would utterly destroy the Earth or its atmosphere.

But what about even more extreme experiments? Physicists were (in my view quite rightly) pressured to address the speculative ‘existential risks’ that could be triggered by the powerful accelerators at Brookhaven and Geneva, which generate unprecedented concentrations of energy. Could physicists unwittingly convert the entire Earth into particles called ‘strangelets’ – or, even worse, trigger a ‘phase transition’ that would shatter the fabric of space itself? Fortunately, reassurance could be offered: indeed I was one of those who pointed out that cosmic rays of much higher energies collide so frequently in the Galaxy, yet haven’t ripped space apart. And they have penetrated white dwarfs and neutron stars without triggering their conversion into ‘strangelets’.

But physicists should surely be circumspect and precautionary about carrying out experiments that generate conditions with no precedent even in the cosmos – just as biologists should avoid release of potentially-devastating genetically-modified pathogens.

So how risk-averse should we be? Some would argue that odds of ten million to one against an existential disaster would be good enough, because that is below the chance that, within the next year, an asteroid large enough to cause global devastation will hit the Earth. (This is like arguing that the extra carcinogenic effects of artificial radiation are acceptable if they don’t so much as double the risk from natural radiation.) But to some, this limit may not seem stringent enough. If there were a threat to the entire Earth, the public might properly demand assurance that the probability is below one in a billion – even one in a trillion – before sanctioning such an experiment.

But can we meaningfully give such assurances? We may offer these odds against the Sun not rising tomorrow, or against a fair die giving 100 sixes in a row; that’s because we’re confident that we understand these things. But if our understanding is shaky – as it plainly is at the frontiers of physics –  we can’t really assign a probability, nor confidently assert that something is stupendously unlikely. It’s surely presumptuous to place extreme confidence in any theories about what happens when atoms are smashed together with unprecedented energy. If a congressional committee asked: ‘are you really claiming that there’s less than a one in a billion chance that you’re wrong?’ I’d feel uncomfortable saying yes.

But on the other hand, if a congressman went on to ask: “Could such an experiment disclose a transformative discovery that – for instance – provided a new source of energy for the world?”, I’d again offer high odds against it. The issue is then the relative likelihood of these two unlikely events – one hugely beneficial, the other catastrophic. Innovation is often hazardous, but if we don’t take risks we may forgo disproportionate benefits. Undiluted application of the ‘precautionary principle’ has a manifest downside. There is ‘the hidden cost of saying no’.

And, by the way, the priority that we should assign to avoiding truly existential disasters depends on an ethical question posed by (for instance) the philosopher Derek Parfit, which is this. Consider two scenarios: scenario A wipes out 90 percent of humanity; scenario B wipes out 100 percent. How much worse is B than A? Some would say 10 percent worse: the body count is 10 percent higher. But others would say B was incomparably worse, because human extinction forecloses the existence of billions, even trillions, of future people – and indeed an open-ended post-human future.
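A crude illustration shows why the second camp regards the difference as far more than 10 per cent. The figure for potential future people below is purely illustrative; the argument requires only that it is vast.

```python
# Comparing Parfit's two scenarios under two metrics. The figure for potential
# future people is purely illustrative -- the argument needs only that it is vast.

current_population = 7e9
potential_future_people = 1e14      # illustrative: billions per century, over many millennia

deaths_a = 0.9 * current_population     # scenario A: 90 per cent of humanity
deaths_b = current_population           # scenario B: 100 per cent of humanity

# Metric 1: body count among the living
print(f"B versus A, by body count: {deaths_b / deaths_a:.2f}x worse")          # ~1.11x

# Metric 2: count the future people whose existence extinction forecloses
loss_a = deaths_a                       # civilisation can recover; the future survives
loss_b = deaths_b + potential_future_people
print(f"B versus A, counting the foreclosed future: {loss_b / loss_a:,.0f}x worse")
```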

Especially if you accept the latter viewpoint, you’ll agree that existential catastrophes deserve more attention. That’s why some of us in (the other) Cambridge – both natural and social scientists – have inaugurated a research programme (the Centre for the Study of Existential Risk) to address these ‘existential’ risks, as well as the wider class of extreme risks I’ve discussed. We need to deploy the best scientific expertise to assess which alleged risks are pure science fiction and which could conceivably become real; to consider how to enhance resilience against the more credible ones; and to warn against technological developments that could run out of control. And there are similar efforts elsewhere: at Oxford in the UK, here at MIT, and in other places.

Moreover, we shouldn’t be complacent that all such probabilities are minuscule. We’ve no grounds for assuming that human-induced threats worse than those on our current risk register are improbable: they are newly emergent, so we have a limited time base for exposure to them and can’t be sanguine that we would survive them for long – nor about the ability of governments to cope if disaster strikes. Indeed we have zero grounds for confidence that we can survive the worst that future technologies could bring in their wake.

Technology brings with it great hopes, but also great fears. We mustn’t forget an important maxim: the unfamiliar is not the same as the improbable.

Another digression:

I’m often asked: is there a special perspective that astronomers can offer to science and philosophy? Having worked among them for many years, I have to tell you that contemplation of vast expanses of space and time doesn’t make astronomers serene and relaxed. They fret about everyday hassles as much as anyone. But they do have one special perspective –  an awareness of an immense future.

The stupendous timespans of the evolutionary past are now part of common culture (outside ‘fundamentalist’ circles, at any rate). But most people still somehow think we humans are the culmination of the evolutionary tree.  That hardly seems credible to an astronomer. Our Sun formed 4.5 billion years ago, but it’s got 6 billion more before the fuel runs out. And the expanding universe will continue –  perhaps forever –  destined to become ever colder, ever emptier. To quote Woody Allen, eternity is very long, especially towards the end.

Posthuman evolution –  here on Earth and far beyond – could  be as prolonged as the Darwinian evolution that’s led to us –  and even more wonderful. Any creatures witnessing the Sun’s demise 6 billion years hence won’t be human –  they’ll be as different from us as we are from a bug. Indeed evolution will be even faster than in the past – on a technological not a natural selection timescale.

Even in this ‘concertinaed’ timeline – extending billions of years into the future, as well as into the past – this century may be a defining moment at which humans could jeopardise life’s immense potential. That’s why the avoidance of complete extinction has special resonance for an astronomer.


Obligations of scientists

Finally, a few thoughts of special relevance to my hosts in STS. Sheila Jasanoff and others have discussed the obligations of scientists when their investigations have potential social, economic and ethical impacts that concern all citizens. These issues are starkly relevant to the theme of this talk. So I’d like, before closing, to offer some thoughts – though with diffidence in front of this audience. It’s important to keep ‘clear water’ between science and policy. Risk assessment should be separate from risk management. Scientists should present policy options based on a consensus of expert opinion; but if they engage in advocacy they should recognise that on the economic, social and ethical aspects of any policy they speak as citizens and not as experts – and will have a variety of views.

I’d highlight some fine exemplars from the past: for instance, the atomic scientists who developed the first nuclear weapons during World War II. Fate had assigned them a pivotal role in history. Many of them – men such as Jo Rotblat, Hans Bethe, Rudolf Peierls and John Simpson (all of whom I was privileged to know in their later years) – returned with relief to peacetime academic pursuits. But the ivory tower wasn’t, for them, a sanctuary. They continued not just as academics but as engaged citizens – promoting efforts to control the power they had helped unleash, through national academies, the Pugwash movement, and other bodies.

They were the alchemists of their time, possessors of secret specialized knowledge. The technologies I’ve discussed today have implications just as momentous as nuclear weapons. But in contrast to the ‘atomic scientists’, those engaged with the new challenges  span almost all the sciences, are broadly international – and work in the commercial as well as public sector.

But they all have a responsibility. You would be a poor parent if you didn’t care what happened to your children in adulthood, even though you may have little control over them. Likewise, scientists shouldn’t be indifferent to the fruits of their ideas – their creations.  They should try to foster benign spin-offs – commercial or otherwise. They should resist, so far as they can, dubious or threatening applications of their work, and alert politicians when appropriate. We need to foster a culture of ‘responsible innovation’, especially in fields like biotech, advanced AI and geoengineering.

But, more than that, choices on how technology is applied – what to prioritise, and what to regulate – require wide public debate, and such debate must be informed and leveraged by ‘scientific citizens’, who will have a range of political perspectives. They can do this via campaigning groups, via blogging and journalism, or through political activity. There is a role for national academies too.

A special obligation lies on those in academia or self-employed entrepreneurs –  they have more freedom to engage in public debate than those employed in government service or in industry. (Academics have a special privilege to influence students. Polls show, unsurprisingly, that younger people, who expect to survive most of the century, are more engaged and anxious about long-term and global issues – we should respond to their concerns.)

More should be done to assess, and then minimise, the extreme risks I’ve addressed. But though we live under their shadow, there seems no scientific impediment to achieving a sustainable and secure world, where all enjoy a lifestyle better than those in the ‘west’ do today. We can be technological optimists, even though the balance of effort in technology needs redirection – and needs to be guided by values that science itself can’t provide. But the intractable politics and sociology – the gap between potentialities and what actually happens – engender pessimism. Politicians look to their own voters – and the next election. Stockholders expect a pay-off in the short run. We downplay what’s happening even now in far-away countries. And we discount too heavily the problems we’ll leave for new generations. Without a broader perspective – without realising that we’re all on this crowded world together – governments won’t properly prioritise projects that are long-term in a political perspective, even if a mere instant in the history of our planet.

“Space-ship Earth” is hurtling through space. Its passengers are anxious and fractious. Their life-support system is vulnerable to disruption and breakdowns. But there is too little planning, too little horizon-scanning, too little awareness of long-term risks.

There needs to be a serious research programme, involving natural and social scientists, to compile a more complete register of these ‘extreme risks’, and to enhance resilience against the more credible ones. The stakes are so high that those involved in this effort will have earned their keep even if they reduce the probability of a catastrophe by one in the sixth decimal place.
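The arithmetic behind that last claim is simple expected value; the cost figure below is an illustrative assumption.

```python
# Why even a tiny reduction in catastrophe probability justifies a research programme.
# The cost of a civilisation-scale catastrophe is an illustrative assumption.

catastrophe_cost = 1e14          # assumed cost, in dollars: lives plus a century of lost growth
probability_reduction = 1e-6     # an improvement in the sixth decimal place

expected_value_of_effort = catastrophe_cost * probability_reduction
print(f"Expected value of the risk reduction: ${expected_value_of_effort / 1e6:.0f} million")
# ~$100 million with these assumptions -- far more than such a programme would cost.
```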

I’ll close with a reflection on something close to home, Ely Cathedral. This overwhelms us today. But think of its impact  900 years ago –  think of the vast enterprise its construction entailed. Most of its builders had never travelled more than 50 miles. The fens were their world. Even the most educated knew of essentially nothing beyond Europe. They thought the world was a few thousand years old –  and that it might not last another thousand.

But despite these constricted horizons, in both time and space –  despite the deprivation and harshness of their lives –  despite their primitive technology and meagre resources –  they built this huge and glorious building –  pushing the boundaries of what was possible. Those who conceived it knew they wouldn’t live to see it finished. Their legacy still elevates our spirits, nearly a millennium later.

What a contrast to so much of our discourse today! Unlike our forebears, we know a great deal about our world –  and indeed about what lies beyond. Technologies that our ancestors couldn’t have conceived enrich our lives and our understanding. Many phenomena still make us fearful, but the advance of science spares us from irrational dread.  We know that we are stewards of a precious ‘pale blue dot’ in a vast cosmos –  a planet with a future measured in billions of years –  whose fate depends on humanity’s collective actions this century.

But all too often the focus is short term and parochial. We downplay what’s happening even now in impoverished far-away countries. And we give too little thought to what kind of world we’ll leave for our grandchildren.

In today’s runaway world, we can’t aspire to leave a monument lasting a thousand years, but it would surely be shameful if we persisted in policies that denied future generations a fair inheritance and left them with a more depleted and more hazardous world. Wise choices will require the idealistic and effective efforts of natural scientists, environmentalists, social scientists and humanists – all guided by the knowledge that 21st century science can offer. And by values that science alone can’t provide.

But we mustn’t leap from denial to despair. So, having started with H G Wells, I give the final word to another secular sage, the great immunologist Peter Medawar.

“The bells that toll for mankind are like the bells of Alpine cattle. They are attached to our own necks, and it must be our fault if they do not make a tuneful and melodious sound.”

Martin Rees is a Fellow of Trinity College and Emeritus Professor of Cosmology and Astrophysics at the University of Cambridge. He is also the chair of the Longitude Prize committee, a £10m reward for helping to combat antibiotic resistance, which is now open for submissions. Details here: longitudeprize.org. A version of this lecture was first delivered at the Harvard School of Government on 6 Nov 2014.


Fitter, dumber, more productive

How the craze for Apple Watches, Fitbits and other wearable tech devices revives the old and discredited science of behaviourism.

When Tim Cook unveiled the latest operating system for the Apple Watch in June, he described the product in a remarkable way. This is no longer just a wrist-mounted gadget for checking your email and social media notifications; it is now “the ultimate device for a healthy life”.

With the watch’s fitness-tracking and heart rate-sensor features to the fore, Cook explained how its Activity and Workout apps have been retooled to provide greater “motivation”. A new Breathe app encourages the user to take time out during the day for deep breathing sessions. Oh yes, this watch has an app that notifies you when it’s time to breathe. The paradox is that if you have zero motivation and don’t know when to breathe in the first place, you probably won’t survive long enough to buy an Apple Watch.

The watch and its marketing are emblematic of how the tech trend is moving beyond mere fitness tracking into what one might call quality-of-life tracking and algorithmic hacking of the quality of consciousness. A couple of years ago I road-tested a brainwave-sensing headband, called the Muse, which promises to help you quiet your mind and achieve “focus” by concentrating on your breathing as it provides aural feedback over earphones, in the form of the sound of wind at a beach. I found it turned me, for a while, into a kind of placid zombie with no useful “focus” at all.

A newer product even aims to hack sleep – that productivity wasteland, which, according to the art historian and essayist Jonathan Crary’s book 24/7: Late Capitalism and the Ends of Sleep, is an affront to the foundations of capitalism. So buy an “intelligent sleep mask” called the Neuroon to analyse the quality of your sleep at night and help you perform more productively come morning. “Knowledge is power!” it promises. “Sleep analytics gathers your body’s sleep data and uses it to help you sleep smarter!” (But isn’t one of the great things about sleep that, while you’re asleep, you are perfectly stupid?)

The Neuroon will also help you enjoy technologically assisted “power naps” during the day to combat “lack of energy”, “fatigue”, “mental exhaustion” and “insomnia”. When it comes to quality of sleep, of course, numerous studies suggest that late-night smartphone use is very bad, but if you can’t stop yourself using your phone, at least you can now connect it to a sleep-enhancing gadget.

So comes a brand new wave of devices that encourage users to outsource not only their basic bodily functions but – as with the Apple Watch’s emphasis on providing “motivation” – their very willpower. These are thrillingly innovative technologies and yet, in the way they encourage us to think about ourselves, they implicitly revive an old and discarded school of thinking in psychology. Are we all neo-behaviourists now?

***

The school of behaviourism arose in the early 20th century out of a virtuous scientific caution. Experimenters wished to avoid anthropomorphising animals such as rats and pigeons by attributing to them mental capacities for belief, reasoning, and so forth. This kind of description seemed woolly and impossible to verify.

The behaviourists discovered that the actions of laboratory animals could, in effect, be predicted and guided by careful “conditioning”, involving stimulus and reinforcement. They then applied Ockham’s razor: there was no reason, they argued, to believe in elaborate mental equipment in a small mammal or bird; at bottom, all behaviour was just a response to external stimulus. The idea that a rat had a complex mentality was an unnecessary hypothesis and so could be discarded. The psychologist John B Watson declared in 1913 that behaviour, and behaviour alone, should be the whole subject matter of psychology: to project “psychical” attributes on to animals, he and his followers thought, was not permissible.

The problem with Ockham’s razor, though, is that sometimes it is difficult to know when to stop cutting. And so more radical behaviourists sought to apply the same lesson to human beings. What you and I think of as thinking was, for radical behaviourists such as the Yale psychologist Clark L Hull, just another pattern of conditioned reflexes. A human being was merely a more complex knot of stimulus responses than a pigeon. Once perfected, some scientists believed, behaviourist science would supply a reliable method to “predict and control” the behaviour of human beings, and thus all social problems would be overcome.

It was a kind of optimistic, progressive version of Nineteen Eighty-Four. But it fell sharply from favour after the 1960s, and the subsequent “cognitive revolution” in psychology emphasised the causal role of conscious thinking. What became cognitive behavioural therapy, for instance, owed its impressive clinical success to focusing on a person’s cognition – the thoughts and the beliefs that radical behaviourism treated as mythical. As CBT’s name suggests, however, it mixes cognitive strategies (analyse one’s thoughts in order to break destructive patterns) with behavioural techniques (act a certain way so as to affect one’s feelings). And the deliberate conditioning of behaviour is still a valuable technique outside the therapy room.

The effective “behavioural modification programme” first publicised by Weight Watchers in the 1970s is based on reinforcement and support techniques suggested by the behaviourist school. Recent research suggests that clever conditioning – associating the taking of a medicine with a certain smell – can boost the body’s immune response later when a patient detects the smell, even without a dose of medicine.

Radical behaviourism that denies a subject’s consciousness and agency, however, is now completely dead as a science. Yet it is being smuggled back into the mainstream by the latest life-enhancing gadgets from Silicon Valley. The difference is that, now, we are encouraged to outsource the “prediction and control” of our own behaviour not to a benign team of psychological experts, but to algorithms.

It begins with measurement and analysis of bodily data using wearable instruments such as Fitbit wristbands, the first wave of which came under the rubric of the “quantified self”. (The Victorian polymath and founder of eugenics, Francis Galton, asked: “When shall we have anthropometric laboratories, where a man may, when he pleases, get himself and his children weighed, measured, and rightly photographed, and have their bodily faculties tested by the best methods known to modern science?” He has his answer: one may now wear such laboratories about one’s person.) But simply recording and hoarding data is of limited use. To adapt what Marx said about philosophers: the sensors only interpret the body, in various ways; the point is to change it.

And the new technology offers to help with precisely that, offering such externally applied “motivation” as the Apple Watch. So the reasoning, striving mind is vacated (perhaps with the help of a mindfulness app) and usurped by a cybernetic system to optimise the organism’s functioning. Electronic stimulus produces a physiological response, as in the behaviourist laboratory. The human being herself just needs to get out of the way. The customer of such devices is merely an opaquely functioning machine to be tinkered with. The desired outputs can be invoked by the correct inputs from a technological prosthesis. Our physical behaviour and even our moods are manipulated by algorithmic number-crunching in corporate data farms, and, as a result, we may dream of becoming fitter, happier and more productive.

***


The broad current of behaviourism was not homogeneous in its theories, and nor are its modern technological avatars. The physiologist Ivan Pavlov induced dogs to salivate at the sound of a bell, which they had learned to associate with food. Here, stimulus (the bell) produces an involuntary response (salivation). This is called “classical conditioning”, and it is advertised as the scientific mechanism behind a new device called the Pavlok, a wristband that delivers mild electric shocks to the user in order, so it promises, to help break bad habits such as overeating or smoking.

The explicit behaviourist-revival sell here is interesting, though it is arguably predicated on the wrong kind of conditioning. In classical conditioning, the stimulus evokes the response; but the Pavlok’s painful electric shock is a stimulus that comes after a (voluntary) action. This is what the psychologist who became the best-known behaviourist theoretician, B F Skinner, called “operant conditioning”.

By associating certain actions with positive or negative reinforcement, an animal is led to change its behaviour. The user of a Pavlok treats herself, too, just like an animal, helplessly suffering the gadget’s painful negative reinforcement. “Pavlok associates a mild zap with your bad habit,” its marketing material promises, “training your brain to stop liking the habit.” The use of the word “brain” instead of “mind” here is revealing. The Pavlok user is encouraged to bypass her reflective faculties and perform pain-led conditioning directly on her grey matter, in order to get from it the behaviour that she prefers. And so modern behaviourist technologies act as though the cognitive revolution in psychology never happened, encouraging us to believe that thinking just gets in the way.
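The distinction can be made concrete with a toy simulation – entirely illustrative, and not a model of the Pavlok or of any real experiment – in which reinforcement delivered after an action gradually reshapes which action gets chosen, with no appeal to what the agent “thinks”.

```python
import random

# A toy operant-conditioning loop: an agent chooses between two 'levers', and
# reinforcement (reward or a mild 'zap') after each choice reshapes its behaviour.
# Everything here is illustrative.

weights = {"rewarded_lever": 1.0, "punished_lever": 1.0}   # initially indifferent

def choose():
    total = sum(weights.values())
    return random.choices(list(weights), [w / total for w in weights.values()])[0]

random.seed(0)
for trial in range(200):
    action = choose()
    reinforcement = 1.0 if action == "rewarded_lever" else -0.5
    weights[action] = max(0.1, weights[action] + 0.1 * reinforcement)

total = sum(weights.values())
print({lever: round(w / total, 2) for lever, w in weights.items()})
# After training, the rewarded lever is chosen almost all the time:
# behaviour shaped purely by consequences.
```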

Technologically assisted attempts to defeat weakness of will or concentration are not new. In 1925 the inventor Hugo Gernsback announced, in the pages of his magazine Science and Invention, an invention called the Isolator. It was a metal, full-face hood, somewhat like a diving helmet, connected by a rubber hose to an oxygen tank. The Isolator, too, was designed to defeat distractions and assist mental focus.

The problem with modern life, Gernsback wrote, was that the ringing of a telephone or a doorbell “is sufficient, in nearly all cases, to stop the flow of thoughts”. Inside the Isolator, however, sounds are muffled, and the small eyeholes prevent you from seeing anything except what is directly in front of you. Gernsback provided a salutary photograph of himself wearing the Isolator while sitting at his desk, looking like one of the Cybermen from Doctor Who. “The author at work in his private study aided by the Isolator,” the caption reads. “Outside noises being eliminated, the worker can concentrate with ease upon the subject at hand.”

Modern anti-distraction tools such as computer software that disables your internet connection, or word processors that imitate an old-fashioned DOS screen, with nothing but green text on a black background, as well as the brain-measuring Muse headband – these are just the latest versions of what seems an age-old desire for technologically imposed calm. But what do we lose if we come to rely on such gadgets, unable to impose calm on ourselves? What do we become when we need machines to motivate us?

***

It was B F Skinner who supplied what became the paradigmatic image of behaviourist science with his “Skinner Box”, formally known as an “operant conditioning chamber”. Skinner Boxes come in different flavours but a classic example is a box with an electrified floor and two levers. A rat is trapped in the box and must press the correct lever when a certain light comes on. If the rat gets it right, food is delivered. If the rat presses the wrong lever, it receives a painful electric shock through the booby-trapped floor. The rat soon learns to press the right lever all the time. But if the levers’ functions are changed unpredictably by the experimenters, the rat becomes confused, withdrawn and depressed.

Skinner Boxes have been used with success not only on rats but on birds and primates, too. So what, after all, are we doing if we sign up to technologically enhanced self-improvement through gadgets and apps? As we manipulate our screens for reassurance and encouragement, or wince at a painful failure to be better today than we were yesterday, we are treating ourselves, similarly, as objects to be improved through operant conditioning. We are climbing willingly into a virtual Skinner Box.

As Carl Cederström and André Spicer point out in their book The Wellness Syndrome, published last year: “Surrendering to an authoritarian agency, which is not just telling you what to do, but also handing out rewards and punishments to shape your behaviour more effectively, seems like undermining your own agency and autonomy.” What’s worse is that, increasingly, we will have no choice in the matter anyway. Gernsback’s Isolator was explicitly designed to improve the concentration of the “worker”, and so are its digital-age descendants. Corporate employee “wellness” programmes increasingly encourage or even mandate the use of fitness trackers and other behavioural gadgets in order to ensure an ideally efficient and compliant workforce.

There are many political reasons to resist the pitiless transfer of responsibility for well-being on to the individual in this way. And, in such cases, it is important to point out that the new idea is a repackaging of a controversial old idea, because that challenges its proponents to defend it explicitly. The Apple Watch and its cousins promise an utterly novel form of technologically enhanced self-mastery. But it is also merely the latest way in which modernity invites us to perform operant conditioning on ourselves, to cleanse away anxiety and dissatisfaction and become more streamlined citizen-consumers. Perhaps we will decide, after all, that tech-powered behaviourism is good. But we should know what we are arguing about. The rethinking should take place out in the open.

In 1987, three years before he died, B F Skinner published a scholarly paper entitled Whatever Happened to Psychology as the Science of Behaviour?, reiterating his now-unfashionable arguments against psychological talk about states of mind. For him, the “prediction and control” of behaviour was not merely a theoretical preference; it was a necessity for global social justice. “To feed the hungry and clothe the naked are remedial acts,” he wrote. “We can easily see what is wrong and what needs to be done. It is much harder to see and do something about the fact that world agriculture must feed and clothe billions of people, most of them yet unborn. It is not enough to advise people how to behave in ways that will make a future possible; they must be given effective reasons for behaving in those ways, and that means effective contingencies of reinforcement now.” In other words, mere arguments won’t equip the world to support an increasing population; strategies of behavioural control must be designed for the good of all.

Arguably, this authoritarian strand of behaviourist thinking is what morphed into the subtly reinforcing “choice architecture” of nudge politics, which seeks gently to compel citizens to do the right thing (eat healthy foods, sign up for pension plans) by altering the ways in which such alternatives are presented.

By contrast, the Apple Watch, the Pavlok and their ilk revive a behaviourism evacuated of all social concern and designed solely to optimise the individual customer. By using such devices, we voluntarily offer ourselves up to a denial of our voluntary selves, becoming atomised lab rats, to be manipulated electronically through the corporate cloud. It is perhaps no surprise that when the founder of American behaviourism, John B Watson, left academia in 1920, he went into a field that would come to profit very handsomely indeed from his skills of manipulation – advertising. Today’s neo-behaviourist technologies promise to usher in a world that is one giant Skinner Box in its own right: a world where thinking just gets in the way, and we all mechanically press levers for food pellets.

This article first appeared in the 18 August 2016 issue of the New Statesman, Corbyn’s revenge