Why can’t scientists warn us about earthquakes such as the one that struck in Gujarat, western India, killing possibly as many as 100,000 people? Why is it – if astronomers can see nearly to the edge of the universe and biologists can clone living organisms – that the science of geophysics cannot tell us when and where the earth will start shaking?
What is most peculiar about this situation is that the basic earthquake process is conceptually simple. The continental plates are great fragments of the earth’s crust which ride on the hot, slowly flowing rock of the mantle like gigantic rafts. Wherever two of these rub shoulders – as they do all along the San Andreas fault in California, for instance – they tend to stick together. But slowly, as continents drift, the rocks get twisted out of shape and, when the stress builds up beyond a certain threshold, something finally gives and there is an earthquake.
So, it seems, it shouldn’t be too difficult to tell where and when the big events will take place. This isn’t quantum physics. Even so, the tragedy in India reminds us that the record of earthquake prediction research is truly dismal. There has never been a single unambiguous success. And there have been many notable failures.
In 1976, for example, a researcher with the US Bureau of Mines predicted that two enormous quakes of magnitude 9.8 and 8.8 on the Richter scale would strike off the coast of Peru in August 1981 and May 1982. He also predicted a foreshock of magnitude 7.5 to 8 for June 1981. When the foreshock didn’t happen, he retracted his prediction, but the Peruvian government was thrown into such a scare that an official of the US Geological Survey had to travel there to calm fears.
Also in the late 1970s, Japanese scientists became convinced that a great quake was soon to hit central Japan. In the past, earthquakes had occurred in the region at roughly 120-year intervals. As more than 120 years had passed since the last quake, they reasoned that another was imminent. A vast emergency response system was duly put in place and yet, today, 25 years later, the quake has still not arrived. The 1995 Kobe quake, in fact, struck in an area of Japan where scientists thought the risk was low.
In 1995, the chairman of the geology department at the University of Southern California predicted that a large quake would rock central California in the spring or early summer of that year. It never happened.
There are countless examples of similar failures. Robert Geller of Tokyo University, one of the world’s foremost earthquake experts, wrote recently: “Earthquake prediction research has been conducted for over 100 years with no obvious successes. Claims of breakthroughs have failed to withstand scrutiny. Extensive searches have failed to find reliable precursors . . . reliable issuing of alarms of imminent large earthquakes appears to be effectively impossible.”
Can an area of research even be considered to be scientific if it cannot make predictions? As the poet Paul Valéry saw things, “‘Science’ means simply the aggregate of all the recipes that are always successful.” And to that he added, “All the rest is literature.” Does earthquake science belong to literature? Should earthquake researchers throw in the towel and work on something else? In an online debate hosted by the science journal Nature in 1999 (www.nature.com), one geophysicist suggested that earthquake prediction is “the alchemy of our times”, a topic that, despite its practical impossibility, is “fatally attractive to both scientists and the general public”.
Over the past decade, physicists have discovered that systems as diverse as a pile of sand, the earth’s crust and its ecosystems, and even our financial markets seem to have a tendency to “self-organise” into what is known as a “critical state”. This is a natural condition of extreme instability in which the system remains always poised on the edge of sudden, radical change. It is, in a way, tuned so as to be hypersensitive to even the tiniest of influences. In such a setting it becomes next to impossible to predict what will happen next.
The statistical distribution of earthquakes – the raw numbers for how many small, intermediate and large quakes take place – follows a mathematical pattern consistent with this idea. What’s more, researchers have discovered that some of the basic models which geophysicists use to mimic the earthquake process also fall into this category.
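The sandpile is the standard toy model of such a critical state, and a few lines of simulation are enough to see the signature behaviour: avalanches with no typical size. This is a minimal sketch of that kind of model (the grid size, the toppling threshold of four grains, and the grain count are illustrative choices, not anything drawn from the research described here):

```python
import random

def sandpile(size=20, grains=20000, seed=1):
    """Drop grains one at a time on a square grid; any cell holding four
    or more grains topples, passing one grain to each neighbour (grains
    at the edge fall off).  Returns the avalanche size -- the number of
    topplings -- triggered by each dropped grain."""
    random.seed(seed)
    grid = [[0] * size for _ in range(size)]
    sizes = []
    for _ in range(grains):
        r, c = random.randrange(size), random.randrange(size)
        grid[r][c] += 1
        topples = 0
        unstable = [(r, c)]
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < 4:
                continue
            grid[i][j] -= 4
            topples += 1
            if grid[i][j] >= 4:          # still overloaded: topple again
                unstable.append((i, j))
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < size and 0 <= nj < size:
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= 4:
                        unstable.append((ni, nj))
        sizes.append(topples)
    return sizes

sizes = sandpile()
# Most drops cause little or nothing; a rare few set off huge cascades.
small = sum(1 for s in sizes if 1 <= s <= 3)
large = sum(1 for s in sizes if s >= 100)
```

Once the pile has organised itself into the critical state, most grains trigger nothing or a tiny slip while a rare few set off cascades spanning the whole grid – the same scale-free mix of small, intermediate and large events that the earthquake statistics show.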
The idea suggests that the pattern of stresses and strains in the earth’s crust is not random, but possesses an intricate organisation which makes it extremely difficult to foresee the ultimate consequences of, say, a tiny increase in stress in just one place. That change might do nothing more than bend and compress the rocks a bit more. Or it might set off a small quake – a sequential slipping of rocks along a fault that would carry on for a short distance. It could, on the other hand, set off a chain reaction of movement going much further and resulting in a major quake.
If this idea is correct, then highly intricate features of the global organisation of stress and strain within the earth will determine where and when the next great quake will take place. This implies that there may be no identifiable details in the crust from which scientists could hope to “read out” the future, and that no amount of data gathering and computation will ever be enough to make a prediction even of the rough timing or magnitude of the next earthquake.
As one geophysicist put it, an earthquake when it starts out “does not know how big it will ultimately become”. And if it doesn’t know, neither can we.
This view suggests that it may be wise for geophysicists to work in a different way. Even if one cannot make predictions, this does not imply that there is no regularity at all in the earthquake process.
We know, for instance, that earthquakes cluster along those places where continental plates meet. This leads to the uninspired conclusion that it is more hazardous, as far as earthquakes are concerned, to live in San Francisco than in London. But quakes also cluster in time, and this teaches a more surprising lesson. It seems intuitive that if an area suffers no earthquake for a long time, then it is “overdue” for a quake and should suffer one soon. But the idea flies in the face of the actual statistics: the longer a region goes without a quake, the less likely it is to see one soon.
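That statistical claim is easy to demonstrate in miniature. In any model where the gaps between events follow a heavy-tailed distribution – here an arbitrary Pareto distribution stands in for real catalogue data – a long quiet spell makes an event in the near future less likely, not more:

```python
import random

def frac_within(elapsed, window, samples=200_000, alpha=1.5, seed=2):
    """Fraction of heavy-tailed (Pareto) waiting times that end within
    `window` years, given that `elapsed` years have already passed with
    no event.  A toy model of clustered quake timing, not real data."""
    random.seed(seed)
    hits = survivors = 0
    for _ in range(samples):
        t = random.paretovariate(alpha)   # waiting time, minimum 1 year
        if t > elapsed:                   # still quiet after `elapsed` years
            survivors += 1
            if t <= elapsed + window:     # event arrives within the window
                hits += 1
    return hits / survivors

soon_after_short_wait = frac_within(elapsed=2, window=1)
soon_after_long_wait = frac_within(elapsed=20, window=1)
# The longer the quiet spell so far, the smaller the chance of a quake soon.
```

With a heavy tail, having already waited a long time is evidence that this particular gap is one of the rare enormous ones – the opposite of the “overdue” intuition.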
These ideas suggest that we may be able to benefit from something less ambitious – predicting not the precise timing and location of specific quakes, but the likelihood of having a quake of a certain size in a certain zone over a certain number of years. These aren’t the kinds of predictions that grab headlines, but they may be enough to inform building codes and to establish good procedures for emergency response in areas that are indeed subject to hazard.
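A forecast of that kind reduces to simple arithmetic. As a purely hypothetical illustration, take a zone where quakes of the size in question have historically arrived about five times a century, and treat arrivals as independent events at a constant average rate (a textbook Poisson model, not anything from the article):

```python
import math

def quake_probability(rate_per_year, years):
    """Probability of at least one event within `years`, assuming events
    arrive independently at a constant average rate (Poisson model)."""
    return 1.0 - math.exp(-rate_per_year * years)

# Hypothetical zone: large quakes observed roughly 5 times per century.
p30 = quake_probability(5 / 100, 30)   # chance of at least one in 30 years
```

Under those assumptions the chance of at least one such quake in the next 30 years comes out at roughly three in four – exactly the style of figure that building codes and emergency planners can put to use, even though it says nothing about any individual quake.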
Friedrich Nietzsche put his finger on the psychology that makes us long to find readily identifiable causes for catastrophes: “To trace something unknown back to something known is alleviating, soothing, gratifying, and gives moreover a feeling of power. Danger, disquiet, anxiety attend the unknown, and the first instinct is to eliminate these distressing states. First principle: any explanation is better than none . . . The cause-creating drive is thus conditioned and excited by the feeling of fear . . .”
As the tragedy in India has once again emphasised, however, there may be no simple answers; indeed, there is good evidence that predicting the circumstances of catastrophes may in many instances be strictly impossible. Unforeseen catastrophes are a grim feature of reality, and they may be with us always.
The writer is a physicist and former journalist for Nature and New Scientist. His book Ubiquity: the science of history . . . or why the world is simpler than we think was published last September by Weidenfeld & Nicolson (£20)