So, we’re finally there: December 2012, the month the world ends. Assuming the Mayan prophecies are right, it seems awfully late in the day for the University of Cambridge to open its Centre for the Study of Existential Risk.
Martin Rees, a former president of the Royal Society and noted doom-monger, is leading the way. He has long been convinced that human activity is capable of wiping us all out. We should worry less about the effect of pesticides in our food, he says, and more about the possibility of a bioengineering lab unwittingly releasing a new plague into the world. Or someone pressing the nuclear bomb button. Or robots rising up to make us their slaves. Or computers becoming sentient and shutting down the systems on which we depend.
These are the “low-probability, high-impact” events that could do us in and we’re not paying them enough attention. “These issues require a great deal more scientific investigation than they currently receive,” says the project’s philosopher, Huw Price.
We could be accused of overinflated self-importance here. The greater part of humanity has survived every pandemic so far, for instance, so there is little reason to assume that a human-engineered virus would bring about an extinction event. Yes, computers and robots could in theory become self-aware, but that is something we have been actively trying to engineer for decades – without success. And they might not want to destroy us even if they do become sentient. At least, not until they get to know us.
Much scarier is what natural catastrophes – whether on earth or beyond it – could do to us. We can reasonably expect a catastrophic supervolcano eruption within the next 100,000 years, for instance. The ash cloud from such an event would do more than keep aircraft grounded: it would envelop the earth in near-darkness for years, bringing global food production to a halt. Billions would die.
A supernova explosion or gamma-ray burst that fires its radiation towards earth would destroy the ozone layer, creating a burden of ultraviolet radiation that would give most of us fatal cancers. Such events happen at random every few hundred million years, and there is no defence.
We might be able to deflect an incoming asteroid, but species-destroying impacts are rare in any case: experts reckon that a strike with global consequences comes along perhaps twice in a million years. For now, the skies are clear.
It’s worth noting that scientific projects such as the one starting out in Cambridge talk about existential risks to humanity but tend to focus on events that would primarily affect developed western societies. You are much more likely to suffer a nuclear strike, say, if you live in a highly developed part of the world, especially one of its capital cities.
Similarly, an event that destroys electricity supply infrastructure – whether it results from terrorist action or a solar flare – poses a much greater existential risk for those living in areas where heating or air conditioning is essential to survival. Again, these tend to be more developed, technologically reliant societies.
In many ways, it’s the inverse of the climate change threat. Rising sea levels and crop failures may change the economics of the western world, but they are not an existential threat here. Less developed areas of the world, however, face total wipeout, and they are powerless to protect themselves, largely because they are not the source of the emissions driving the problem. It would be interesting to set up a Tuvalu Centre for the Study of Existential Risk. The islanders might well conclude that their most pressing problem would be solved by a small nuclear war among the earth’s major civilisations.
Michael Brooks’s “Free Radicals: The Secret Anarchy of Science” is published by Profile Books (£8.99)