From the outset of the pandemic, most countries have employed time-honoured public health methods: quarantining; tracing and isolating contacts; imposing lockdowns; and controlling borders. But a few nations, the UK included, have taken strikingly divergent paths; none of them has emerged well.
It is not only about past experience. Yes, the countries principally affected by previous deadly coronaviruses such as Sars and Mers have all combated Covid-19 effectively. But so too have nations such as Denmark, Germany and New Zealand, all as inexperienced in tackling novel pathogens as the UK. What explains our deficiency? More than 40,000 people have now died from Covid-19 in Britain. Political failure undoubtedly played its part, but I will leave others to dissect that. Our response to Covid-19 has also exposed serious weaknesses in the UK’s governmental scientific advisory process.
To be clear: the scientists who have contributed to Sage and its various subcommittees have universally acted in good faith. But something structural has gone very wrong. There are numerous disciplines that can inform political decision-making, but the UK’s advice has been dominated by just two: theoretical epidemiological modelling and behavioural science, which have been synthesised to provide scenarios to inform policy. Both are important fields that have contributions to make, but they have been insufficiently checked and balanced by other perspectives, with disastrous results.
The output from mathematical modelling can look very beguiling. Confident graphs seem to predict what is going to happen. Different coloured lines illustrate how outcomes will be mitigated by introducing various control measures. There are even alternative trajectories according to the degree to which the population adheres to what is being imposed: 80 per cent compliance, 60 per cent and so on. The effect feels like scientific soothsaying, assuaging our desire to be able to predict the future and, even better, to control it. Hard, I imagine, for a politician – daunted by the scale of the crisis, and grateful for a road map – to resist.
Yet the UK’s overconfidence in theoretical modelling has several times been rudely exposed. Back on 12 March, when we officially abandoned attempts to contain Covid-19 and moved instead to try to delay and diminish its first wave, we entered the “hosepipe phase”, something no country had previously attempted. It is important to appreciate that this was predicated on the belief that Covid-19 was no longer containable. It was supposed to work like this. While case numbers – analogous to water pressure in a hose – were low, a light squeeze on the pipe would control the flow. These were the frequent-hand-washing and isolate-if-symptomatic measures that constituted the initial government messaging. As case numbers rose – as the water pressure got progressively higher – the grip on the hose would have to clamp tighter and tighter: hence bans on mass gatherings, school and business closures, two-metre social distancing and full lockdown. The ambition was to ride out the first wave, regulating it by fine-tuning which measures were recommended and when, keeping case numbers within NHS capacity while minimising social and economic impacts.
To attempt a feat as untried and ambitious as this would require modelling to predict in real time how the disease was affecting the population, and accurately describe how various control measures would modify that. But during the “hosepipe phase”, the modelling first underestimated the penetration of Covid-19 into the population, allowing a surge to build, then grossly overestimated the height of the peak, provoking the panicked drive for extra bed capacity that saw coronavirus pushed from hospitals into care homes. The modelling also discounted the control measures thought to have proved most efficacious – school closures and banning mass gatherings.
The reason for this wild susceptibility to error is the exquisite sensitivity of theoretical modelling to small variations in input data. The input data are effectively how the virus behaves in the real world. And the problem with a novel pathogen like Covid-19 is that no one actually knows. This doesn’t invalidate modelling as a discipline. But we should have had strong voices around the table questioning the wisdom of building an entire policy on data that could simply not be accurately known.
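This sensitivity is easy to demonstrate. The sketch below is purely illustrative – a textbook SIR (susceptible–infected–recovered) model, not the models Sage actually used – with assumed values for population size, recovery time and the reproduction number R0. Early in an outbreak of a novel pathogen, R0 is exactly the kind of input that cannot be accurately known, yet a modest shift in its assumed value roughly doubles the predicted peak.

```python
# Illustrative sketch only (not Sage's actual model): a minimal SIR
# simulation showing how sensitive the predicted epidemic peak is to
# the assumed reproduction number R0. All parameters are assumptions
# chosen for illustration.

def sir_peak_infected(r0, recovery_days=7.0, population=66_000_000,
                      initial_infected=1_000, days=365, dt=0.1):
    """Return the peak number of simultaneously infected people for a
    simple SIR epidemic with the given R0 (forward-Euler integration)."""
    gamma = 1.0 / recovery_days   # recovery rate per day
    beta = r0 * gamma             # transmission rate per day
    s = population - initial_infected   # susceptible
    i = float(initial_infected)         # infected
    peak = i
    for _ in range(int(days / dt)):
        new_infections = beta * s * i / population * dt
        recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - recoveries
        peak = max(peak, i)
    return peak

# Small changes in the assumed R0 produce very different peaks:
for r0 in (2.0, 2.4, 3.0):
    print(f"R0 = {r0}: peak infected ≈ {sir_peak_infected(r0):,.0f}")
```

Under these assumptions, raising R0 from 2.0 to 3.0 – well within the range of early estimates for a new virus – roughly doubles the predicted peak, and with it the projected demand on hospital capacity. That is the scale of error a policy built entirely on such inputs inherits.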
What about that belief that by 12 March, Covid-19 could no longer be contained? To begin with, we didn’t seek to stop new infections entering the country – a decision informed by modelling designed for a different pathogen, flu. So in came coronavirus: not only from China but also from European states like Italy, Spain, France and Germany that were not deemed a risk at that time. Any case-finding we were attempting had highly restrictive qualifying criteria. For every infected person identified, there were numerous others going undetected simply because they either had the “wrong” travel history, or their symptoms didn’t match our rigid definition of Covid-19. The result was that the virus was rapidly seeded throughout the country, and community transmission quickly became sustained, probably at least a month before 12 March.
Even so, had we locked down at that stage, as many other countries did, we would have brought it under control again with far fewer deaths. But the predictions from behavioural science were that locking down so early could not be maintained, something that experience has subsequently shown to have been wrong.
The formation of independent Sage was a reaction to this fatal imbalance in the scientific advisory process. But the problem needs fixing at its source, in the lull before any second wave.
This article appears in the 01 Jul 2020 issue of the New Statesman, Anatomy of a crisis