In January 2003 a seafood merchant suffering a respiratory crisis was admitted to hospital in Guangzhou in southern China. As he coughed and spluttered, he infected dozens of hospital staff. One of them, a doctor, had flu-like symptoms, but recovered in time to travel to his nephew’s wedding in Hong Kong. In room 911 of the Metropole Hotel, he fell sick again, and the disease spread along the corridor.
As other guests left the hotel, they carried the disease with them. One, a young woman, flew home to Singapore, where she was admitted to hospital on 1 March. Doctors noticed that several of her visitors returned as patients. When nurses came down with the same symptoms, the hospital realised it had a dangerous problem on its hands.
Singapore moved swiftly to control the virus that became known as severe acute respiratory syndrome (Sars). All patients with symptoms that even remotely resembled Sars were taken into isolation and healthcare workers were provided with personal protective equipment and required to check themselves for symptoms three times a day. On 27 March, schools closed.
Investigators traced the contacts of each new Sars patient within 24 hours, and those contacts were instructed to quarantine themselves. They were handed a phone and a camera and told to be ready to video themselves at home whenever an official called. Despite these measures, by 24 April, 22 people had died. Penalties for quarantine-breakers were stiffened, airport screening was introduced, and taxi drivers had their temperatures checked daily. On 13 July, the last Sars patient was discharged from hospital.
Sars killed about 800 people worldwide (one in ten who were infected), mostly in China and its neighbours. To the extent that the rest of the world noticed, the disease was thought to have faded away. Those who fought the outbreak at close quarters, however, knew that Sars didn’t burn out of its own accord. It had to be stopped in its tracks.
In industries that have to be vigilant for risks of disaster, such as aviation or nuclear energy, “near misses” are treated as flashing red lights. When a plane comes close to crashing or a factory explosion is narrowly averted, investigations are launched and processes revised: just because the disaster did not occur this time does not mean it won’t next time.
But near misses can also breed complacency. We have a tendency, identified by those who study the psychology of risk, to treat near misses as grounds for optimism. Because the worst didn’t happen, people grow more convinced that it never will. A recent Norwegian study of traffic behaviour found that drivers who had experienced near accidents were subsequently more willing to take risks. At the level of organisations, close calls are sometimes taken as reasons to stick with existing procedures. The organisational theorists Junko Shimazoe and Richard Burton call this the “justification shift”.
In meetings just before the launch of the Nasa Space Shuttle Challenger in 1986, some engineers advised against proceeding. They were worried about the effect of the very cold weather on the O-rings, which sealed in the hot gases in the rocket boosters. Other decision-makers pointed to past shuttle launches, including ones made in cold weather, to justify their confidence. The believers in delay argued that the cold-weather launches were near misses, showing not that the current standards were safe, merely that they had worked so far. The pro-launch camp won the day. Challenger exploded 73 seconds after launch, killing seven astronauts. An inquiry found that the explosion was caused by a breach in the O-rings.
To learn from a near miss, you first have to recognise it as one. In the past 20 years there has been a series of viral outbreaks: Sars in 2002-03, H5N1 (bird flu) in 2006, H1N1 (swine flu) in 2009, Ebola in 2013, Mers in 2015. Each briefly threatened to become a pandemic, before subsiding. Western governments took this to mean that Covid-19 would go the same way. If Singapore, China and Taiwan were better prepared for this virus than the UK, it’s because officials there knew, in their bones, that those outbreaks might have wreaked far greater damage.
With the Challenger launch, according to Shimazoe and Burton, the clear difference between those who remained tenacious proponents of delay and those who became pro-launch was not seniority or expertise, but “psychological distance” from the problem. The engineers who had studied O-rings in detail never changed their opinion that the risk was unacceptable.
Western governments, including our own, knew about Sars and Mers but were psychologically distant from problems that afflicted countries far away. That represented a failure of imagination. As the pandemic expert Ali Khan, who worked to fight Sars in Singapore, told the New Yorker, “A disease anywhere is a disease everywhere.”
Learning from near misses also means accepting that just because a risk can’t be measured does not mean it is not real. Nasa’s sceptics lost the debate because of their inability to quantify the risk.
Since there hadn’t been any crashes to date, the advocates of delay didn’t have the data to overcome the burden of proof. Western governments acted late on Covid-19 in part because, without a visceral intuition of danger, they coolly awaited more information. It took the steeply rising death tolls of near neighbours to jolt them into action.
When we make inquiries into the management of this crisis, I worry that we will learn only how we should have fought Covid-19, rather than how to deal with the next outbreak. Awful as this virus is, I wonder what we would be going through now if it were even more transmissible, or if it were killing children in their thousands. We should confront the possibility that what we are experiencing now is itself a near miss.
This article appears in the 27 May 2020 issue of the New Statesman, The peak