
Your brain on pseudoscience: the rise of popular neurobollocks

The “neuroscience” shelves in bookshops are groaning. But are the works of authors such as Malcolm Gladwell and Jonah Lehrer just self-help books dressed up in a lab coat?

An intellectual pestilence is upon us. Shop shelves groan with books purporting to explain, through snazzy brain-imaging studies, not only how thoughts and emotions function, but how politics and religion work, and what the correct answers are to age-old philosophical controversies. The dazzling real achievements of brain research are routinely pressed into service for questions they were never designed to answer. This is the plague of neuroscientism – aka neurobabble, neurobollocks, or neurotrash – and it’s everywhere.

In my book-strewn lodgings, one literally trips over volumes promising that “the deepest mysteries of what makes us who we are are gradually being unravelled” by neuroscience and cognitive psychology. (Even practising scientists sometimes make such grandiose claims for a general audience, perhaps urged on by their editors: that quotation is from the psychologist Elaine Fox’s interesting book on “the new science of optimism”, Rainy Brain, Sunny Brain, published this summer.) In general, the “neural” explanation has become a gold standard of non-fiction exegesis, adding its own brand of computer-assisted lab-coat bling to a whole new industry of intellectual quackery that affects to elucidate even complex sociocultural phenomena. Chris Mooney’s The Republican Brain: the Science of Why They Deny Science – and Reality disavows “reductionism” yet encourages readers to treat people with whom they disagree more as pathological specimens of brain biology than as rational interlocutors.

The New Atheist polemicist Sam Harris, in The Moral Landscape, interprets brain and other research as showing that there are objective moral truths, enthusiastically inferring – almost as though this were the point all along – that science proves “conservative Islam” is bad.

Happily, a new branch of the neuroscience-explains-everything genre may be created at any time by the simple expedient of adding the prefix “neuro” to whatever you are talking about. Thus, “neuroeconomics” is the latest in a long line of rhetorical attempts to sell the dismal science as a hard one; “molecular gastronomy” has now been trumped in the scientised gluttony stakes by “neurogastronomy”; students of Republican and Democratic brains are doing “neuropolitics”; literature academics practise “neurocriticism”. There is “neurotheology”, “neuromagic” (according to Sleights of Mind, an amusing book about how conjurors exploit perceptual bias) and even “neuromarketing”. Hoping it’s not too late to jump on the bandwagon, I have decided to announce that I, too, am skilled in the newly minted fields of neuroprocrastination and neuroflâneurship.

Illumination is promised on a personal as well as a political level by the junk enlightenment of the popular brain industry. How can I become more creative? How can I make better decisions? How can I be happier? Or thinner? Never fear: brain research has the answers. It is self-help armoured in hard science. Life advice is the hook for nearly all such books. (Some cram the hard sell right into the title – such as John B Arden’s Rewire Your Brain: Think Your Way to a Better Life.) Quite consistently, their recommendations boil down to a kind of neo-Stoicism, drizzled with brain-juice. In a self-congratulatory egalitarian age, you can no longer tell people to improve themselves morally. So self-improvement is couched in instrumental, scientifically approved terms.

The idea that a neurological explanation could exhaust the meaning of experience was already being mocked as “medical materialism” by the psychologist William James a century ago. And today’s ubiquitous rhetorical confidence about how the brain works papers over a still-enormous scientific uncertainty. Paul Fletcher, professor of health neuroscience at the University of Cambridge, says that he gets “exasperated” by much popular coverage of neuroimaging research, which assumes that “activity in a brain region is the answer to some profound question about psychological processes. This is very hard to justify given how little we currently know about what different regions of the brain actually do.” Too often, he tells me in an email correspondence, a popular writer will “opt for some sort of neuro-flapdoodle in which a highly simplistic and questionable point is accompanied by a suitably grand-sounding neural term and thus acquires a weightiness that it really doesn’t deserve. In my view, this is no different to some mountebank selling quacksalve by talking about the physics of water molecules’ memories, or a beautician talking about action liposomes.”

Shades of grey

The human brain, it is said, is the most complex object in the known universe. That a part of it “lights up” on an fMRI scan does not mean the rest is inactive; nor is it obvious what any such lighting-up indicates; nor is it straightforward to infer general lessons about life from experiments conducted under highly artificial conditions. Nor do we have the faintest clue about the biggest mystery of all – how does a lump of wet grey matter produce the conscious experience you are having right now, reading this paragraph? How come the brain gives rise to the mind? No one knows.

So, instead, here is a recipe for writing a hit popular brain book. You start each chapter with a pat anecdote about an individual’s professional or entrepreneurial success, or narrow escape from peril. You then mine the neuroscientific research for an apparently relevant specific result and narrate the experiment, perhaps interviewing the scientist involved and describing his hair. You then climax in a fit of premature extrapolation, inferring from the scientific result a calming bromide about what it is to function optimally as a modern human being. Voilà, a laboratory-sanctioned Big Idea in digestible narrative form. This is what the psychologist Christopher Chabris has named the “story-study-lesson” model, perhaps first perfected by one Malcolm Gladwell. A series of these threesomes may be packaged into a book, and then resold again and again as a stand-up act on the wonderfully lucrative corporate lecture circuit.

Such is the rigid formula of Imagine: How Creativity Works, published in March this year by the American writer Jonah Lehrer. The book is a shatteringly glib mishmash of magazine yarn, bizarrely incompetent literary criticism, inspiring business stories about mops and dolls and zany overinterpretation of research findings in neuroscience and psychology. Lehrer responded to my hostile review of the book by claiming that I thought the science he was writing about was “useless”, but such garbage needs to be denounced precisely in defence of the achievements of science. (In a sense, as Paul Fletcher points out, such books are “anti science, given that science is supposed to be our protection against believing whatever we find most convenient, comforting or compelling”.) More recently, Lehrer admitted fabricating quotes by Bob Dylan in Imagine, which was hastily withdrawn from sale, and he resigned from his post at the New Yorker. To invent things supposedly said by the most obsessively studied popular artist of our age is a surprising gambit. Perhaps Lehrer misunderstood his own advice about creativity.

Mastering one’s own brain is also the key to survival in a dog-eat-dog corporate world, as promised by the cognitive scientist Art Markman’s Smart Thinking: How to Think Big, Innovate and Outperform Your Rivals. Meanwhile, the field (or cult) of “neurolinguistic programming” (NLP) sells techniques not only of self-overcoming but of domination over others. (According to a recent NLP handbook, you can “create virtually any and all states” in other people by using “embedded commands”.) The employee using such arcane neurowisdom will get promoted over the heads of his colleagues; the executive will discover expert-sanctioned ways to render his underlings more docile and productive, harnessing “creativity” for profit.

Waterstones now even has a display section labelled “Smart Thinking”, stocked with pop brain tracts. The true function of such books, of course, is to free readers from the responsibility of thinking for themselves. This is made eerily explicit in the psychologist Jonathan Haidt’s The Righteous Mind, published last March, which claims to show that “moral knowledge” is best obtained through “intuition” (arising from unconscious brain processing) rather than by explicit reasoning. “Anyone who values truth should stop worshipping reason,” Haidt enthuses, in a perverse manifesto for autolobotomy. I made an Olympian effort to take his advice seriously, and found myself rejecting the reasoning of his entire book.

Modern neuro-self-help pictures the brain as a kind of recalcitrant Windows PC. You know there is obscure stuff going on under the hood, so you tinker delicately with what you can see to try to coax it into working the way you want. In an earlier age, thinkers pictured the brain as a marvellously subtle clockwork mechanism, that being the cutting-edge high technology of the day. Our own brain-as-computer metaphor has been around for decades: there is the “hardware”, made up of different physical parts (the brain), and the “software”, processing routines that use different neuronal “circuits”. Updating things a bit for the kids, the evolutionary psychologist Robert Kurzban, in Why Everyone (Else) Is a Hypocrite, explains that the brain is like an iPhone running a bunch of different apps.

Such metaphors are apt to a degree, as long as you remember to get them the right way round. (Gladwell, in Blink – whose motivational self-help slogan is that “we can control rapid cognition” – burblingly describes the fusiform gyrus as “an incredibly sophisticated piece of brain software”, though the fusiform gyrus is a physical area of the brain, and so analogous to “hardware” not “software”.) But these writers tend to reach for just one functional story about a brain subsystem – the story that fits with their Big Idea – while ignoring other roles the same system might play. This can lead to a comical inconsistency across different books, and even within the oeuvre of a single author.

Is dopamine “the molecule of intuition”, as Jonah Lehrer risibly suggested in The Decisive Moment (2009), or is it the basis of “the neural highway that’s responsible for generating the pleasurable emotions”, as he wrote in Imagine? (Meanwhile, Susan Cain’s Quiet: the Power of Introverts in a World That Can’t Stop Talking calls dopamine the “reward chemical” and postulates that extroverts are more responsive to it.) Other recurring stars of the pop literature are the hormone oxytocin (the “love chemical”) and mirror neurons, which allegedly explain empathy. Jonathan Haidt tells the weirdly unexplanatory micro-story that, in one experiment, “The subjects used their mirror neurons, empathised, and felt the other’s pain.” If I tell you to use your mirror neurons, do you know what to do? Alternatively, can you do as Lehrer advises and “listen to” your prefrontal cortex? Self-help can be a tricky business.

Cherry-picking

Distortion of what and how much we know is bound to occur, Paul Fletcher points out, if the literature is cherry-picked.

“Having outlined your theory,” he says, “you can then cite a finding from a neuroimaging study identifying, for example, activity in a brain region such as the insula . . . You then select from among the many theories of insula function, choosing the one that best fits with your overall hypothesis, but neglecting to mention that nobody really knows what the insula does or that there are many ideas about its possible function.”

But the great movie-monster of nearly all the pop brain literature is another region: the amygdala. It is routinely described as the “ancient” or “primitive” brain, scarily atavistic. There is strong evidence for the amygdala’s role in fear, but then fear is one of the most heavily studied emotions; popularisers downplay or ignore the amygdala’s associations with the cuddlier emotions and memory. The implicit picture is of our uneasy coexistence with a beast inside the head, which needs to be controlled if we are to be happy, or at least liberal. (In The Republican Brain, Mooney suggests that “conservatives and authoritarians” might be the nasty way they are because they have a “more active amygdala”.) René Descartes located the soul in the pineal gland; the moral of modern pop neuroscience is that original sin is physical – a bestial, demonic proto-brain lurking at the heart of darkness within our own skulls. It’s an angry ghost in the machine.

Indeed, despite their technical paraphernalia of neurotransmitters and anterior temporal gyruses, modern pop brain books are offering a spiritual topography. Such is the seductive appeal of fMRI brain scans, their splashes of red, yellow and green lighting up what looks like a black intracranial vacuum. In mass culture, the fMRI scan (usually merged from several individuals) has become a secular icon, the converse of a Hubble Space Telescope image. The latter shows us awe-inspiring vistas of distant nebulae, as though painstakingly airbrushed by a sci-fi book-jacket artist; the former peers the other way, into psychedelic inner space. And the pictures, like religious icons, inspire uncritical devotion: a 2008 study, Fletcher notes, showed that “people – even neuroscience undergrads – are more likely to believe a brain scan than a bar graph”.

In The Invisible Gorilla, Christopher Chabris and his collaborator Daniel Simons advise readers to be wary of such “brain porn”, but popular magazines, science websites and books are frenzied consumers and hypers of these scans. “This is your brain on music”, announces a caption to a set of fMRI images, and we are invited to conclude that we now understand more about the experience of listening to music. The “This is your brain on” meme, it seems, is indefinitely extensible: Google results offer “This is your brain on poker”, “This is your brain on metaphor”, “This is your brain on diet soda”, “This is your brain on God” and so on, ad nauseam. I hereby volunteer to submit to a functional magnetic-resonance imaging scan while reading a stack of pop neuroscience volumes, for an illuminating series of pictures entitled This Is Your Brain on Stupid Books About Your Brain.

None of the foregoing should be taken to imply that fMRI and other brain-investigation techniques are useless: there is beautiful and amazing science in how they work and what well-designed experiments can teach us. “One of my favourites,” Fletcher says, “is the observation that one can take measures of brain activity (either using fMRI or EEG) while someone is learning . . . a list of words, and that activity can actually predict whether particular words will be remembered when the person is tested later (even the next day). This to me demonstrates something important – that observing activity in the brain can tell us something about how somebody is processing stimuli in ways that the person themselves is unable to report. With measures like that, we can begin to see how valuable it is to measure brain activity – it is giving us information that would otherwise be hidden from us.”

In this light, one might humbly venture a preliminary diagnosis of the pop brain hacks’ chronic intellectual error. It is that they misleadingly assume we always know how to interpret such “hidden” information, and that it is always more reliably meaningful than what lies in plain view. The hucksters of neuroscientism are the conspiracy theorists of the human animal, the 9/11 Truthers of the life of the mind.

Steven Poole is the author of the forthcoming book “You Aren’t What You Eat”, which will be published by Union Books in October.

This article was updated on 18 September 2012.

This article first appeared in the 10 September 2012 issue of the New Statesman, Autumn politics special


A century ago, the Spanish flu killed 100 million people. Is a new pandemic on the way?

Our leaders need to act like the outbreak has already started – because for all we know it may have.

It is hard not to have a sneaking envy of the virus. As complex creatures, we are distracted by myriad demands on our attention; we will never know the dead-eyed focus of the viral world. It is akin to the psychopath: a cold, purposeful drive to achieve its own agenda, coupled with the skills and resourcefulness to succeed. In a world threatened by nuclear war and devastating climate change, it may actually be the virus that we should fear most.

This is the centenary year of the Spanish flu outbreak, when a virus killed between 50 and 100 million people in a matter of months. The devastation was worldwide; it is only known as Spanish flu because Spain, neutral in the ongoing hostilities of World War One, was the only country without press restrictions. Across Europe, people assumed their own outbreaks originated in the only place reporting on the disaster.

A number of authors have lined up with a kind of grim celebration of influenza’s annus mirabilis. As well as chronicling the fatal reach of this organism, they all offer a warning about a follow-up pandemic that is overdue – and for which, it seems, we are largely unprepared. “Somewhere out there a dangerous virus is boiling up in the bloodstream of a bird, bat, monkey, or pig, preparing to jump to a human being,” says Jonathan Quick in The End of Epidemics. “It has the potential to wipe out millions of us, including my family and yours, over a matter of weeks or months.”

If that seems a little shlocky, you should know that Quick is no quack. He is a former director at the WHO, the current chair of the Global Health Council and a faculty member at Harvard Medical School. The book’s blurb includes endorsements from the director of the London School of Hygiene and Tropical Medicine, the president of Médecins Sans Frontières, and the president of the Rockefeller Foundation.

The numbers Quick serves up are stupefying. Bill Gates, for instance, has said it is more likely than not that he will live to see a viral outbreak kill over 10 million people in a year. In Gates’s nightmare scenario, outlined by computer simulations created with disease-modelling experts, 33 million people die within 200 days of the first human infection. The potential for exponential spread means a death toll of 300 million is possible in the first year. “We would be in a world where scrappy, ravaged survivors struggle for life in a zombie-movie wasteland,” Quick tells us in his informed, cogent and – honestly – frightening book.

If you can’t imagine what that is like, you could try asking the Yupik people of Alaska, who were devastated by the 1918 Spanish flu. You might not get an answer, however, because they remain traumatised, and have made a pact not to speak about the pandemic that shattered their ancient culture. (A pandemic is a disease that spreads across continents; an epidemic is usually contained within a country or continent.) They aren’t the only long-term sufferers. The Vanuatu archipelago suffered 90 per cent mortality and 20 of its local languages went extinct. Those in the womb in 1918 were also affected. A baby born in 1919 “was less likely to graduate and earn a reasonable wage, and more likely to go to prison, claim disability benefit, and suffer from heart disease,” reports Laura Spinney in Pale Rider.

Such arresting snippets of the flu’s legacy abound in Spinney’s thoughtful, coherent take on the 1918 outbreak. The book’s subtitle suggests that the Spanish flu changed the world, and Spinney certainly backs this up. Societies broke down and had to be rebuilt; recovering populations were reinvigorated by the simple calculus of Darwin’s “survival of the fittest”; public health provisions were first imagined and then brought into reality; artists and writers responded to a new global mood by establishing new movements.

Not every outcome could be spun as a positive. Scientists, for instance, were humiliated by their inability to halt the flu’s progress, creating an opportunity for quack medicines to arise and establish themselves. Some of our greatest writers lived through the trauma, but could never bring themselves to discuss it in their stories. Virginia Woolf noted that it was “strange indeed that illness has not taken its place with love and battle and jealousy among the prime themes of literature”.

Spinney’s background as a science writer shines through: her handling of the workings of the flu is detailed and deft. She brings both the influenza A virus (the only type responsible for pandemics) and the human immune system to life, laying out the biochemical processes that kill and cure with clarity and care. She exposes the chilling roots of often-used but seldom-explained viral names such as “H1N1” (Spanish flu) or “H5N1” (bird flu). H is for haemagglutinin, the lollipop-shaped appendage that allows a virus to break into a cell and take over the means of production. N is for neuraminidase, the “glass-cutter” structure that allows replicated viruses to break out again and unleash hell upon the host. So far, we know of 18 H’s and 11 N’s and they all have ever-evolving sub-types that make a long-lasting general vaccine against the flu an elusive dream: “Every flu pandemic of the 20th century was triggered by the emergence of a new H in influenza A,” says Spinney.

For all her technical expertise, Spinney has a light touch and a keen eye for the comic. She relates how a ferret sneezing in the face of a British researcher in 1933 exposed influenza’s ability to travel between biological species, for instance. She also excels with the bigger picture, detailing the century of scientific detective work that has allowed us to piece together the genetic elements of the 1918 virus and gain insights into its creation. It seems to have jumped to humans on a farm in Kansas, via domestic and wild birds indigenous to North America. There may have been some ingredients from pigs too, but that’s not settled.

Spinney’s afterword questions whether our collective memory for such events ever reflects the truth of the moment. “When the story of the Spanish flu was told, it was told by those who got off most lightly: the white and well off,” she tells us. “With very few exceptions, the ones who bore the brunt of it, those living in ghettoes or at the rim, have yet to tell their tale. Some, such as the minorities whose languages died with them, never will.”

That said, Catharine Arnold has done a remarkable job of relating the tales of a diverse set of sufferers, crafting an arresting and intimate narrative of the 1918 pandemic. She pulls the accounts of hundreds of victims into a gripping tale that swoops down into the grisly detail, then soars up to give a broad view over the landscape of this calamitous moment in human history.

Arnold’s remembrances come from the unknown and the celebrated alike. A Margery Porter from south London emphasised that “we just couldn’t stand up. Your legs actually gave way, I can’t exaggerate that too much.” John Steinbeck described the experience of infection as almost spiritual. “I went down and down,” he said, “until the wingtips of angels brushed my eyes.”

The reality was, inevitably, less poetic. A local surgeon removed one of Steinbeck’s ribs so that he could gain access to the author’s infected lung. Most victims’ bodies turned blue-black as they died. Healthcare workers reported appalling scenes, with delirious patients suffering horrific nosebleeds. “Sometimes the blood would just shoot across the room,” a navy nurse recalled. If their lungs punctured, the patients’ bodies would fill with air. “You would feel somebody and he would be bubbles… When their lungs collapsed, air was trapped beneath their skin. As we rolled the dead in winding sheets, their bodies crackled – an awful crackling noise which sounded like Rice Krispies when you pour milk over them.”

The killer in 1918 was often not the flu virus itself but the “cytokine storm” of an immune system overreacting to the infection. Strong, fit young people, with their efficient immune systems, were thus particularly at risk, their bodies effectively shutting themselves down. Then there were the ravages of opportunistic bacteria that would lodge in the devastated tissue, causing pneumonia and other fatal complications. Arnold paints a grim but vivid picture of exhausted gravediggers and opportunistic funeral directors cannily upping their prices. The morgues were overflowing, and morticians worked day and night. In the end, mass graves were the only answer for the poverty-stricken workers attempting to bury their loved ones before they, too, succumbed.

No one was spared from grief or suffering at the hands of the “Spanish Lady”, as the flu came to be known. Louis Brownlow, the city commissioner for Washington DC, reported nursing his stricken wife while answering telephone calls from desperate citizens. One woman called to say that of the three girls she shared a room with, two had died, and the third was on her way out. Brownlow sent a police officer to the house. A few hours later, the sergeant reported back from the scene: “Four girls dead.”

Some of the other stories Arnold has unearthed are equally heartbreaking. A Brooklyn boy called Michael Wind wrote of the moment his mother died after less than a day of being ill. He and his five siblings were at her bedside, as was their father, “head in hands, sobbing bitterly”. The following morning, knowing that he was soon to die too, their father took the three youngest children to the orphanage.

Arnold writes beautifully, and starkly, of the tragedy that unfolded in the autumn months of 1918: “the Spanish Lady played out her death march, killing without compunction. She did not discriminate between statesmen, painters, soldiers, poets, writers or brides.” She chronicles the Lady’s path from the United States and Canada through Europe, Africa and Asia, culminating in New Zealand’s “Black November”. The book is utterly absorbing. But how do we respond to its horrors and tragedies? What are we to do with our collective memories of such visceral, world-shattering events? Learn from them – and fast, argues Jonathan Quick.

Unlike Arnold and Spinney, Quick is not content to be a chronicler or a bystander. He is, he says, both terrified at the looming disaster and furious at the lack of high-level reaction to its threat. He is determined to create a movement that will instigate change, mimicking the way activists forced change from governments paralysed by, and pharmaceutical companies profiteering from, the Aids pandemic. Quick has channelled his fury: The End of Epidemics is, at heart, a call to arms against influenza, Ebola, Zika and the many other threats before us.


So what are we to do? First, our leaders need to act like the outbreak has already started – because for all we know it may have. We must strengthen our public health systems, and create robust agencies and NGOs ready to monitor and deal with the threat. We must educate citizens and implement surveillance, prevention and response mechanisms, while fighting misinformation and scaremongering. Governments must step up (and fund) research.

We can’t develop a vaccine until the threat is manifest, but we can prepare technology for fast large-scale production. We can also invest in methods of early diagnosis and virus identification. Invest $1 per person per year for 20 years and the threat will be largely neutralised, Quick suggests. Finally – and most importantly – there is an urgent need to create grass-roots support for these measures: citizen groups and other organisations that will hold their leaders to account and prevent death on a scale that no one alive has ever experienced. Is this achievable? Traumatised readers of Quick’s book will be left hoping that it is.

For all the advances of the last century, there are many unknowns. Scientists don’t know, for instance, which microbe will bring the next pandemic, where it will come from, or whether it will be transmitted through the air, by touch, through body fluids or through a combination of routes.

While there is considerable attention focused on communities in West Africa, East Asia or South America as the most likely source of the next outbreak, it’s worth remembering that most scientists now believe the 1918 influenza outbreak began on a farm in Kansas. Quick suggests the next pandemic might have a similar geographical origin, thanks to the industrialised livestock facilities beloved by American food giants.

Viruses naturally mutate and evolve rapidly, taking up stray bits of genetic material wherever they can be found. But it’s the various flu strains that live inside animals that bring sleepless nights to those in the know. They can exist inside a pig, bat or chicken without provoking symptoms, but prove devastating if (when) they make the jump to humans. As more and more humans live in close proximity to domesticated animals, encroach on the territories inhabited by wild animals, and grow their food on unprecedented scales, our chances of an uncontrollable epidemic increase.

The meat factories known as “Concentrated Animal Feeding Operations” (CAFOs) are particularly problematic. They provide cheap meat, poultry, dairy and eggs from animals kept in what Quick terms “concentration camp conditions”, simultaneously creating the perfect breeding ground for new and dangerous pathogens. Pigs, he points out, eat almost everything, so their guts are the perfect mixing bowls for a new and deadly influenza strain. “CAFOs were the birthplace of swine flu, and they could very likely be the birthplace of the next killer pandemic,” Quick warns.

There are other possibilities, though – bioterror, for instance. Bill Gates is among those who have warned that terrorist groups are looking into the possibility of releasing the smallpox virus in a crowded market, or on a plane. Then there is the possibility of a scientist’s mistake. In 1978 a woman died after smallpox was released from a laboratory at the University of Birmingham, UK. In 2004 two Chinese researchers accidentally infected themselves with the SARS virus and spread it to seven other people, one of whom died. In 2014, a cardboard box full of forgotten vials of smallpox was found in a National Institutes of Health facility in Bethesda, Maryland. A year later, the US military accidentally shipped live anthrax spores to labs in the US and a military base in South Korea. It’s not impossible that human error could strike again – with catastrophic results.

Such possibilities lie behind our discomfort with what scientists have to do to further our understanding. Researchers in Rotterdam, for instance, wanted to know whether the deadly H5N1 bird flu could develop a capacity for airborne transmission like the common cold virus. Having failed to modify its genetics to achieve this, they began to pass an infection between ferrets, the animals whose response to the virus most mimics that of humans. Ten ferrets later, healthy animals were catching the virus from the cage next door. Knowing how easily H5N1 can become airborne is exactly the kind of discovery that will bolster our vigilance. It is, after all, many times more fatal than the H1N1 strain that caused the Spanish flu. At the same time, there was a huge – but understandable – furore over whether the research should be published, and thus be available to potential bioterrorists.

We might have to live with such dilemmas, because it is important to be ready to challenge the killer virus when it arrives. As we have seen with Aids and the common cold, developing vaccines takes time, and there is no guarantee of success, even with a concerted research effort.

****

Will we be ready? Quick suggests that our best chance lies in the world’s business leaders realising what’s at stake: economies would be devastated by the next pandemic. In 1918, Arnold points out, the British government was telling citizens it was their patriotic duty to “carry on” and make sure the wheels of industry kept turning. The result was a perfect environment for mass infection. Political leaders made similar mistakes across the Atlantic: on 12 October President Wilson led a gathering of 25,000 New Yorkers down the “Avenue of the Allies”. “That same week,” Arnold reports, “2,100 New Yorkers died of influenza.”

It’s worth noting that Spanish flu did not abate because we outsmarted it. The pandemic ended because the virus ran out of people it could infect. Of those who didn’t die, some survived through a chance natural immunity, and some were lucky enough to have maintained a physical separation from those carrying the invisible threat. The virus simply failed to kill the rest, enabling their bodies to develop the antibodies required to repel a further attack. A generation or two later, when the antibody-equipped immune systems were in the grave, and humans were immunologically vulnerable (and complacent) once again, H1N1 virus re-emerged, causing the 2009 swine flu outbreak.

As these books make clear, this is a history that could repeat all too easily in our time. Of the three, Pale Rider is perhaps the most satisfying. It has greater complexity and nuance than Arnold’s collection of harrowing tales, fascinating though they are. Spinney’s analysis is more circumspect and thus less paralysing than Quick’s masterful exposition of our precarious situation. But the truth is we need all these perspectives, and probably more, if we are to avoid sleepwalking into the next pandemic. Unlike our nemesis, humans lack focus – and it could be our undoing. 

Michael Brooks’s most recent book is “The Quantum Astrologer’s Handbook” (Scribe)

Pale Rider: The Spanish Flu of 1918 and How it Changed the World
Laura Spinney
Vintage, 352pp, £25

Pandemic 1918: The Story of the Deadliest Influenza in History
Catharine Arnold
Michael O’Mara, 368pp, £20

The End of Epidemics
Jonathan D Quick with Bronwyn Fryer
Scribe, 288pp, £14.99

