Why the UK government’s approach to coronavirus modelling is dangerous

In a moment as uncertain as this, mathematical models must not be allowed to stifle human judgement or blur lines of responsibility.

Human history can be told through the history of pandemics. The Black Death of 1347, for instance, killed more than half of Europe’s 80 million population. The Venetians responded by forcing sailors to isolate on their ships for 40 days, a quarantino. For the next three centuries, the plague returned to London every 20 years, killing one in five Londoners each time.

Since then, humans have improved their chances of surviving pandemics. Eighteenth-century advances in vaccine technology – after Edward Jenner discovered in 1796 that injecting his gardener’s son with cowpox inoculated him against smallpox – began protecting people from once-deadly infectious diseases. The development of public health in the 19th century – a result of the physician John Snow’s work in tracing the source of London’s 1854 cholera outbreak – led to improvements in national hygiene. And we now have data, which allows for the rapid detection of viral outbreaks. Since the end of February 2020, we have tracked the spread of coronavirus across the world in real time. Anyone can examine the number of cases and fatalities almost anywhere on Earth.

The British government wants to harness the power of data. Boris Johnson and Patrick Vallance, the government’s chief scientific adviser, have stressed that their approach to stopping the spread of Covid-19 is driven by “mathematical modelling”. This is a risky strategy. While mathematical models can be useful in mapping contagions, data, number crunching and statistics can also dazzle and obscure reality. In a moment as uncertain as the one we are living through, models must not be allowed to stifle human judgement or blur the lines of responsibility.

Winter flu is a yearly occurrence; a coronavirus pandemic is something the world has never faced before. Modelling unprecedented events is difficult, since it requires deep epidemiological uncertainties to be quantified into specific, numerical assumptions. Public science also depends not just on cogent modelling, but on the hard work of communication: from scientists to politicians to the public. Whether unwittingly or deliberately, the uncertainties of the present moment – about the virus itself and the social and economic fallout from the pandemic – have been lost in translation.

The aim of the government’s model, which was recently elaborated in a report by Imperial College's Covid-19 response team, is to simulate the effect of different policies on the spread of coronavirus and the subsequent demands on the NHS. The model depends on striking assumptions about disease progression. Of those aged between 50 and 59 who contract the virus, the government expects one in ten to need hospitalisation, of whom one in eight will require critical care. Of those aged 60 to 69, one in six will require hospitalisation, of whom around a quarter will require critical care. Almost half of those above 70 who contract the virus will need to go to hospital; a little over half of those will require critical care. These are sombre forecasts. 
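These conditional rates compound: the share of all infected people needing critical care is the hospitalisation rate multiplied by the critical-care rate. A few lines of arithmetic make this concrete – an illustrative sketch using the article’s rounded figures, not the exact rates published in the Imperial College report:

```python
# Illustrative arithmetic only, using the article's rounded figures
# (not the exact rates in the Imperial College report).
# Each entry: (fraction of cases hospitalised,
#              fraction of those hospitalised who need critical care)
assumptions = {
    "50-59": (1 / 10, 1 / 8),
    "60-69": (1 / 6, 1 / 4),
    "70+":   (0.45, 0.55),  # "almost half" and "a little over half"
}

for band, (hospitalised, critical) in assumptions.items():
    overall_critical = hospitalised * critical
    print(f"{band}: {hospitalised:.0%} hospitalised, "
          f"{overall_critical:.2%} of all cases needing critical care")
```

On these rounded figures, roughly one in 80 infected people in their fifties would need critical care, one in 24 of those in their sixties, and around one in four of the over-70s.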

The model also projects a number of possible futures as a result of the pandemic: tens of thousands to hundreds of thousands of deaths; a 20 per cent to 80 per cent infection rate; an NHS that can withstand historic levels of demand, or one that cannot; and an economy robust enough to survive what might become the most serious depression for a century. Anyone who professes to have a clear sense of how events will unfold over the coming months does not understand the challenge we face.

There are also four areas of uncertainty that the model must account for. The first is the “reproduction number”, R. This is the number of people that a person with Covid-19 is expected to infect. The government’s model assumes each individual who contracts coronavirus will infect another 2.4 people. The goal is to reduce that number as much as possible. When the reproduction number falls below 1, the virus will begin to die out.  

Small changes in this number are a matter of life and death (the difference between reproduction numbers of 2.3 and 2.5 could result in thousands fewer fatalities). How fast and how widely the virus spreads affects how many people need ventilators at particular moments in particular hospitals. Indeed, time and place is everything: when and where someone contracts the virus determines whether they will have access to the medical care and equipment they need.
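The sensitivity to small changes in R can be sketched with a toy branching-process model – a deliberately simplified illustration, not the government’s model, with a hypothetical seed of 100 cases:

```python
# Toy branching-process sketch (not the Imperial model): cumulative
# infections after a number of transmission generations, for nearby
# values of the reproduction number R. The seed value is hypothetical.

def cumulative_infections(r, seed=100, generations=10):
    # Geometric series: seed * (r^0 + r^1 + ... + r^generations)
    return round(seed * sum(r ** g for g in range(generations + 1)))

for r in (2.3, 2.4, 2.5):
    print(f"R = {r}: {cumulative_infections(r):,} cumulative infections")

# Below R = 1 each generation is smaller than the last, so the
# outbreak shrinks towards zero instead of growing.
print(f"R = 0.9: {cumulative_infections(0.9, generations=1000):,}")
```

After only ten generations, the gap between R = 2.3 and R = 2.5 is already more than a factor of two in cumulative infections, while any R below 1 caps the outbreak at a finite total.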

The second uncertainty is that we don’t know how effective behavioural interventions such as social distancing will be in reducing the reproduction number. Behavioural science is smart guesswork. When journalists report that 80 per cent of the UK could be infected, or that 510,000 could die, this assumes no behavioural change. Equally, when the government suggests deaths could be reduced to 20,000, this assumes everyone practises proper social distancing. We are now at a point where the behaviour of young and middle-aged people who think themselves invulnerable will determine how many others die.

We also know very little about coronavirus itself. The government’s model assumes those who catch it will be immune in the short-term. But we do not know if that is true. Nor do we know how long any immunity would last: a few months, a few years, forever? And we still do not know how any increase in temperature will affect transmission. Summer may bring no reprieve at all.

Finally, the data upon which the model is based is patchy. Testing in the UK (and the US) has been woefully inadequate. Some will have had coronavirus, but will never know whether they are immune because they have not been tested. At present, data lags at least a few weeks behind reality. As testing increases, rising case numbers will reflect not just more people contracting the virus, but better testing. That is a good thing.

Modelling coronavirus requires each of these unknowns to be quantified. This does not mean that models based on uncertain data are completely ineffectual: they can still tell us something about magnitude (how much of an effect a particular policy will have), directionality (whether a policy will make things better or worse), and where and when particular pinch points might be (London in two weeks vs Cardiff in a month). But governments should be wary about leaning too heavily on models, and the Conservative government’s initial laissez-faire approach may become a fatal and enduring lesson in how not to make decisions by trying to quantify deep uncertainty.

All of us must face up to these uncertain times, because there is so much we do not know about rates of infection, immunity, the impact of seasonal changes and how much difference behavioural interventions will make. Data is a gift – without it, far more people would die from what will be the most significant pandemic for a century. 

But we must not let data supplant the role of judgement. When nobody knows what the coming months will hold, or how many will fall sick and die, judgement matters more than ever. What is important are empathy and solidarity, the sense that your choices affect others, and the willingness to take responsibility for the approach you choose. Models cannot do that. Only people can.

Joshua Simons is an EJ Safra Fellow in ethics and an affiliate at the Berkman Klein Centre at Harvard University, a former research scientist at Facebook and former policy adviser for the Labour Party
