
Your brain on pseudoscience: the rise of popular neurobollocks

The “neuroscience” shelves in bookshops are groaning. But are the works of authors such as Malcolm Gladwell and Jonah Lehrer just self-help books dressed up in a lab coat?

An intellectual pestilence is upon us. Shop shelves groan with books purporting to explain, through snazzy brain-imaging studies, not only how thoughts and emotions function, but how politics and religion work, and what the correct answers are to age-old philosophical controversies. The dazzling real achievements of brain research are routinely pressed into service for questions they were never designed to answer. This is the plague of neuroscientism – aka neurobabble, neurobollocks, or neurotrash – and it’s everywhere.

In my book-strewn lodgings, one literally trips over volumes promising that “the deepest mysteries of what makes us who we are are gradually being unravelled” by neuroscience and cognitive psychology. (Even practising scientists sometimes make such grandiose claims for a general audience, perhaps urged on by their editors: that quotation is from the psychologist Elaine Fox’s interesting book on “the new science of optimism”, Rainy Brain, Sunny Brain, published this summer.) In general, the “neural” explanation has become a gold standard of non-fiction exegesis, adding its own brand of computer-assisted lab-coat bling to a whole new industry of intellectual quackery that affects to elucidate even complex sociocultural phenomena. Chris Mooney’s The Republican Brain: the Science of Why They Deny Science – and Reality disavows “reductionism” yet encourages readers to treat people with whom they disagree more as pathological specimens of brain biology than as rational interlocutors.

The New Atheist polemicist Sam Harris, in The Moral Landscape, interprets brain and other research as showing that there are objective moral truths, enthusiastically inferring – almost as though this were the point all along – that science proves “conservative Islam” is bad.

Happily, a new branch of the neuroscience-explains-everything genre may be created at any time by the simple expedient of adding the prefix “neuro” to whatever you are talking about. Thus, “neuroeconomics” is the latest in a long line of rhetorical attempts to sell the dismal science as a hard one; “molecular gastronomy” has now been trumped in the scientised gluttony stakes by “neurogastronomy”; students of Republican and Democratic brains are doing “neuropolitics”; literature academics practise “neurocriticism”. There is “neurotheology”, “neuromagic” (according to Sleights of Mind, an amusing book about how conjurors exploit perceptual bias) and even “neuromarketing”. Hoping it’s not too late to jump on the bandwagon, I have decided to announce that I, too, am skilled in the newly minted fields of neuroprocrastination and neuroflâneurship.

Illumination is promised on a personal as well as a political level by the junk enlightenment of the popular brain industry. How can I become more creative? How can I make better decisions? How can I be happier? Or thinner? Never fear: brain research has the answers. It is self-help armoured in hard science. Life advice is the hook for nearly all such books. (Some cram the hard sell right into the title – such as John B Arden’s Rewire Your Brain: Think Your Way to a Better Life.) Quite consistently, their recommendations boil down to a kind of neo-Stoicism, drizzled with brain-juice. In a self-congratulatory egalitarian age, you can no longer tell people to improve themselves morally. So self-improvement is couched in instrumental, scientifically approved terms.

The idea that a neurological explanation could exhaust the meaning of experience was already being mocked as “medical materialism” by the psychologist William James a century ago. And today’s ubiquitous rhetorical confidence about how the brain works papers over a still-enormous scientific uncertainty. Paul Fletcher, professor of health neuroscience at the University of Cambridge, says that he gets “exasperated” by much popular coverage of neuroimaging research, which assumes that “activity in a brain region is the answer to some profound question about psychological processes. This is very hard to justify given how little we currently know about what different regions of the brain actually do.” Too often, he tells me in an email correspondence, a popular writer will “opt for some sort of neuro-flapdoodle in which a highly simplistic and questionable point is accompanied by a suitably grand-sounding neural term and thus acquires a weightiness that it really doesn’t deserve. In my view, this is no different to some mountebank selling quacksalve by talking about the physics of water molecules’ memories, or a beautician talking about action liposomes.”

Shades of grey

The human brain, it is said, is the most complex object in the known universe. That a part of it “lights up” on an fMRI scan does not mean the rest is inactive; nor is it obvious what any such lighting-up indicates; nor is it straightforward to infer general lessons about life from experiments conducted under highly artificial conditions. Nor do we have the faintest clue about the biggest mystery of all – how does a lump of wet grey matter produce the conscious experience you are having right now, reading this paragraph? How come the brain gives rise to the mind? No one knows.

So, instead, here is a recipe for writing a hit popular brain book. You start each chapter with a pat anecdote about an individual’s professional or entrepreneurial success, or narrow escape from peril. You then mine the neuroscientific research for an apparently relevant specific result and narrate the experiment, perhaps interviewing the scientist involved and describing his hair. You then climax in a fit of premature extrapolation, inferring from the scientific result a calming bromide about what it is to function optimally as a modern human being. Voilà, a laboratory-sanctioned Big Idea in digestible narrative form. This is what the psychologist Christopher Chabris has named the “story-study-lesson” model, perhaps first perfected by one Malcolm Gladwell. A series of these threesomes may be packaged into a book, and then resold again and again as a stand-up act on the wonderfully lucrative corporate lecture circuit.

Such is the rigid formula of Imagine: How Creativity Works, published in March this year by the American writer Jonah Lehrer. The book is a shatteringly glib mishmash of magazine yarn, bizarrely incompetent literary criticism, inspiring business stories about mops and dolls and zany overinterpretation of research findings in neuroscience and psychology. Lehrer responded to my hostile review of the book by claiming that I thought the science he was writing about was “useless”, but such garbage needs to be denounced precisely in defence of the achievements of science. (In a sense, as Paul Fletcher points out, such books are “anti-science, given that science is supposed to be our protection against believing whatever we find most convenient, comforting or compelling”.) More recently, Lehrer admitted fabricating quotes by Bob Dylan in Imagine, which was hastily withdrawn from sale, and he resigned from his post at the New Yorker. To invent things supposedly said by the most obsessively studied popular artist of our age is a surprising gambit. Perhaps Lehrer misunderstood his own advice about creativity.

Mastering one’s own brain is also the key to survival in a dog-eat-dog corporate world, as promised by the cognitive scientist Art Markman’s Smart Thinking: How to Think Big, Innovate and Outperform Your Rivals. Meanwhile, the field (or cult) of “neurolinguistic programming” (NLP) sells techniques not only of self-overcoming but of domination over others. (According to a recent NLP handbook, you can “create virtually any and all states” in other people by using “embedded commands”.) The employee using such arcane neurowisdom will get promoted over the heads of his colleagues; the executive will discover expert-sanctioned ways to render his underlings more docile and productive, harnessing “creativity” for profit.

Waterstones now even has a display section labelled “Smart Thinking”, stocked with pop brain tracts. The true function of such books, of course, is to free readers from the responsibility of thinking for themselves. This is made eerily explicit in the psychologist Jonathan Haidt’s The Righteous Mind, published last March, which claims to show that “moral knowledge” is best obtained through “intuition” (arising from unconscious brain processing) rather than by explicit reasoning. “Anyone who values truth should stop worshipping reason,” Haidt enthuses, in a perverse manifesto for autolobotomy. I made an Olympian effort to take his advice seriously, and found myself rejecting the reasoning of his entire book.

Modern neuro-self-help pictures the brain as a kind of recalcitrant Windows PC. You know there is obscure stuff going on under the hood, so you tinker delicately with what you can see to try to coax it into working the way you want. In an earlier age, thinkers pictured the brain as a marvellously subtle clockwork mechanism, that being the cutting-edge high technology of the day. Our own brain-as-computer metaphor has been around for decades: there is the “hardware”, made up of different physical parts (the brain), and the “software”, processing routines that use different neuronal “circuits”. Updating things a bit for the kids, the evolutionary psychologist Robert Kurzban, in Why Everyone (Else) Is a Hypocrite, explains that the brain is like an iPhone running a bunch of different apps.

Such metaphors are apt to a degree, as long as you remember to get them the right way round. (Gladwell, in Blink – whose motivational self-help slogan is that “we can control rapid cognition” – burblingly describes the fusiform gyrus as “an incredibly sophisticated piece of brain software”, though the fusiform gyrus is a physical area of the brain, and so analogous to “hardware” not “software”.) But these writers tend to reach for just one functional story about a brain subsystem – the story that fits with their Big Idea – while ignoring other roles the same system might play. This can lead to a comical inconsistency across different books, and even within the oeuvre of a single author.

Is dopamine “the molecule of intuition”, as Jonah Lehrer risibly suggested in The Decisive Moment (2009), or is it the basis of “the neural highway that’s responsible for generating the pleasurable emotions”, as he wrote in Imagine? (Meanwhile, Susan Cain’s Quiet: the Power of Introverts in a World That Can’t Stop Talking calls dopamine the “reward chemical” and postulates that extroverts are more responsive to it.) Other recurring stars of the pop literature are the hormone oxytocin (the “love chemical”) and mirror neurons, which allegedly explain empathy. Jonathan Haidt tells the weirdly unexplanatory micro-story that, in one experiment, “The subjects used their mirror neurons, empathised, and felt the other’s pain.” If I tell you to use your mirror neurons, do you know what to do? Alternatively, can you do as Lehrer advises and “listen to” your prefrontal cortex? Self-help can be a tricky business.

Cherry-picking

Distortion of what and how much we know is bound to occur, Paul Fletcher points out, if the literature is cherry-picked.

“Having outlined your theory,” he says, “you can then cite a finding from a neuroimaging study identifying, for example, activity in a brain region such as the insula . . . You then select from among the many theories of insula function, choosing the one that best fits with your overall hypothesis, but neglecting to mention that nobody really knows what the insula does or that there are many ideas about its possible function.”

But the great movie-monster of nearly all the pop brain literature is another region: the amygdala. It is routinely described as the “ancient” or “primitive” brain, scarily atavistic. There is strong evidence for the amygdala’s role in fear, but then fear is one of the most heavily studied emotions; popularisers downplay or ignore the amygdala’s associations with the cuddlier emotions and memory. The implicit picture is of our uneasy coexistence with a beast inside the head, which needs to be controlled if we are to be happy, or at least liberal. (In The Republican Brain, Mooney suggests that “conservatives and authoritarians” might be the nasty way they are because they have a “more active amygdala”.) René Descartes located the soul in the pineal gland; the moral of modern pop neuroscience is that original sin is physical – a bestial, demonic proto-brain lurking at the heart of darkness within our own skulls. It’s an angry ghost in the machine.

Indeed, despite their technical paraphernalia of neurotransmitters and anterior temporal gyruses, modern pop brain books are offering a spiritual topography. Such is the seductive appeal of fMRI brain scans, their splashes of red, yellow and green lighting up what looks like a black intracranial vacuum. In mass culture, the fMRI scan (usually merged from several individuals) has become a secular icon, the converse of a Hubble Space Telescope image. The latter shows us awe-inspiring vistas of distant nebulae, as though painstakingly airbrushed by a sci-fi book-jacket artist; the former peers the other way, into psychedelic inner space. And the pictures, like religious icons, inspire uncritical devotion: a 2008 study, Fletcher notes, showed that “people – even neuroscience undergrads – are more likely to believe a brain scan than a bar graph”.

In The Invisible Gorilla, Christopher Chabris and his collaborator Daniel Simons advise readers to be wary of such “brain porn”, but popular magazines, science websites and books are frenzied consumers and hypers of these scans. “This is your brain on music”, announces a caption to a set of fMRI images, and we are invited to conclude that we now understand more about the experience of listening to music. The “This is your brain on” meme, it seems, is indefinitely extensible: Google results offer “This is your brain on poker”, “This is your brain on metaphor”, “This is your brain on diet soda”, “This is your brain on God” and so on, ad nauseam. I hereby volunteer to submit to a functional magnetic-resonance imaging scan while reading a stack of pop neuroscience volumes, for an illuminating series of pictures entitled This Is Your Brain on Stupid Books About Your Brain.

None of the foregoing should be taken to imply that fMRI and other brain-investigation techniques are useless: there is beautiful and amazing science in how they work and what well-designed experiments can teach us. “One of my favourites,” Fletcher says, “is the observation that one can take measures of brain activity (either using fMRI or EEG) while someone is learning . . . a list of words, and that activity can actually predict whether particular words will be remembered when the person is tested later (even the next day). This to me demonstrates something important – that observing activity in the brain can tell us something about how somebody is processing stimuli in ways that the person themselves is unable to report. With measures like that, we can begin to see how valuable it is to measure brain activity – it is giving us information that would otherwise be hidden from us.”

In this light, one might humbly venture a preliminary diagnosis of the pop brain hacks’ chronic intellectual error. It is that they misleadingly assume we always know how to interpret such “hidden” information, and that it is always more reliably meaningful than what lies in plain view. The hucksters of neuroscientism are the conspiracy theorists of the human animal, the 9/11 Truthers of the life of the mind.

Steven Poole is the author of the forthcoming book “You Aren’t What You Eat”, which will be published by Union Books in October.

This article was updated on 18 September 2012.

This article first appeared in the 10 September 2012 issue of the New Statesman, Autumn politics special


The age of lies: how politicians hide behind statistics

Perhaps it is time to combine our Trump-era, heightened sensitivity to untruths with a new broadcasting technique or two.

The small slabs of crude election soundbites, with extra ornamentation in the form of half-true and meaningless headline statistics, clunk across the airwaves, and we grimace. The dead prose reaches us umpteen times a day – “an economy that works for all”, “the many and not the few”, “work is the way out of poverty”, “more being spent on our schools than ever before”, “the NHS is treating more patients than ever before”, “fastest growth rate in Europe”, “the national interest”, “the most important election in my lifetime” – and yes, let’s hear it for “strong and stable leadership”.

On 30 April, Andrew Marr tried a little witty and civilised pre-emptive mocking to stop Theresa May using soundbites in his interview with her, but it did not work because it could not work. Embarrassment about clichés and almost idiotic numbers is not what democratic politicians worry about at election time. Many of us may pine for the old American game-show device – where, for failing to amuse and divert the audience, contestants are removed from the fray by a man hammering a gong – but that is not on offer and, in election mode, the politicians will do as they have long learned to do. They will listen to the Lynton Crosbys and Seumas Milnes of this world and plough on – and on.

The soundbites are largely vacuous and we are more noisily sardonic about them than three decades ago (hooray for media literacy) but they aren’t worse than normal. There is no point expecting the debate to run on the lines of Gladstone’s Midlothian campaign 140 years ago, when he charged around Britain giving five-hour speeches – richly informed by Liberal philosophy – which did the trick for him and his party.

The clichés are, naturally, often interchangeable. Everybody running for high political office could quite contentedly utter any or all of the above phrases, though I concede it doesn’t require an inspired analyst of modern British politics to know what Theresa May is trying to do with her leadership riff – nor Jeremy Corbyn with his “rule for the many and not the few”, a phrase that has been used religiously since the adoption of universal suffrage. Only Jacob Rees-Mogg would put it to one side.

I spent almost 30 years at the BBC – working with a cadre of (mostly) hugely talented and impartial presenters and editors trying to find ways of injecting a bit more surprise or rigour into political interviews. (Surprise and rigour are often not the same thing.) I recall David Dimbleby reducing Alastair Campbell to semi-public fury in 1997 by excavating Tony Blair’s early political career and finding, neither surprisingly nor, in my view, particularly reprehensibly, that he had said Michael Foot-like things in a Michael Foot-like era. Oddly, nobody had thought to do this after he had been elected leader three years earlier, so Dimbleby’s approach to Blair had an element of surprise. And then there was John Humphrys’s relentless needling of Gordon Brown for his comic refusal after the 2008 financial crash to use the word “cuts” to describe what might have to happen to reduce the budget deficit, or even to agree with his own chancellor, Alistair Darling, that the global economic outlook was very bad. Brown had an on-air mega-curdle.

We know the score – the politicians find the rhetorical and statistical position that provides the best short-term defensive crouch, while the interviewer at least wants to make sure that the audience knows the question posed is relevant, fair and, if need be, that it has been dodged. Time presses on both participants – but the impact of the compression is unequal. The interviewee usually has the upper hand. In her early period Margaret Thatcher, who was a good deal more nervous than her subsequent reputation for clarity and authority would suggest, might well have been the all-time queen of interview delay tactics. However, most interviewees know that once they have found an answer to a question the first thing to do is to pad it out in case the next question is a little more difficult.

I am not outraged by any of this; nor do I believe these encounters should be dismissed as sterile, or that we should be contemptuous of the skills involved on either side of the exchange. The sort of one-sided triumph enjoyed by LBC’s Nick Ferrari with Diane Abbott is rare, and her numerical amnesia over policing made a whole argument go kerplunk – but even in more orthodox interviews you can often detect at the very least a broad weakness in a broad argument.

To my ear Corbyn sounds perpetually unsteady on defence policy (see his Marr interview in the first week of the campaign) and public finances, and neither May nor David Cameron before her manages much fluency on the impact of cuts on the working poor once they have uttered that threadbare soundbite about work being the route out of poverty. Would that it were so simple.

Our willingness to dismiss as boring these interviews, the staple of daily current affairs programmes, is overdone. And we have been a little graceless about the extent to which senior UK politicians do – or did – engage in at least some forms of public debate. Anyone who follows the US media will know how rare it has always been for senior members of the administration and White House staff to expose themselves to the sort of scrutiny still supplied by the Sunday political shows, Radio 4 current affairs programmes, Newsnight or Channel 4 News.

For decades, senior politicians in the UK turned up in the studios – often with scarcely concealed irritation – but they went through with it. In part because it was expected and in part out of self-interest. Good interview performances could lead to rapid promotion. Iain Duncan Smith was (you may be surprised by this) particularly effective in his early years at advocating his causes, and his party’s, in front of a microphone. But the studios did for him when he became Tory leader. As it turned out, his failings were more obvious when confronted by a skilled interviewer than in the House of Commons. His nervous coughing finally caught up with him one morning on the Today programme, and that was that.

Duncan Smith and Abbott are far from alone in seeing their currency plummet as a result of losing the plot in an interview. Harriet Harman, normally a highly fluent and agile politician, was sacked as social security secretary in 1998 after a grim outing, at least for her, with John Humphrys – caused not by his abrasiveness nor by any Abbott-like forgetfulness, but by her almost tangible unhappiness with a New Labour policy she was defending.

Even now, on BBC Question Time, some heavyweights will turn up only to be mauled by the voters on topics a long way away from the heart of their portfolio. Yes, they get copious notes from party researchers and have endless rehearsals to minimise the chance of saying anything too intellectually lively: but they should nevertheless get credit for risking it in the first place.

However, outside election time this tradition of broadcasting interrogation and debate, not much more than 50 years old, is under stealthy attack. The presenting team on Today is seriously good, but it is hard not to notice that the heavy hitters turn up less often for their ten minutes of duelling; similarly with Newsnight and Channel 4 News.

The Prime Minister’s Olympian approach to this sort of public engagement aggravates what was already a problem. The broadcasters may be losing ground. In this election there will be no head-to-head leaders’ debates featuring Labour and the Conservatives, and there is no great uproar about it. As it happens, I don’t believe that their absence is a disaster – not least because the format of individual leaders confronting an engaged Question Time audience one at a time (a “tradition” that began in 1997) provides far more substance and revelation than the 2010 or 2015 leaders’ debates did.

In the meantime, what can be done to the interview to improve the quality of public debate? Forcing out the clichés is not a realistic goal. Yet perhaps it is time to combine our Trump-era, heightened sensitivity to untruths with a new broadcasting technique or two. The BBC Trust (which I was part of for two years until it ceased to exist in April) commissioned its final independent editorial report on the BBC’s use of statistics from a panel of experts chaired by the former UK chief statistician Jil Matheson.

It is a superb piece of work. Above all it pleads with the BBC to do more to put statistics in context. The work was largely complete before the EU referendum so it did not pass judgement about either the veracity of the Brexiteers’ “extra £350m for the NHS” claim or the BBC’s coverage of that claim. I listened and watched a lot and, contrary to the views of many leading members of the Remain campaign, the BBC seemed to me to have consistently signalled to the audience the risible nature of the figure, if not as rudely as many would have liked.

Yet there is a different perspective on that cause célèbre. Only very rarely did the BBC on air (or anyone else, for that matter) compare the sums involved with total UK public expenditure: a net annual payment to the EU of about £8.5bn, compared to public expenditure of about £785bn. This £8.5bn is not a trivial sum – and it is likely to sound gargantuan to an unskilled worker on low wages in Hartlepool – but it hardly threatens the nation’s existence. We will have to think about that number all over again when the EU divorce bill gets paid.
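For scale, the missing context amounts to a single line of arithmetic. Here is a minimal sketch using only the two approximate figures quoted above:

```python
# Back-of-envelope scale check, using the approximate figures quoted above.
net_eu_payment = 8.5e9    # net annual UK payment to the EU, in pounds
public_spending = 785e9   # total annual UK public expenditure, in pounds

share = net_eu_payment / public_spending
print(f"Net EU payment as a share of public spending: {share:.1%}")
# -> about 1.1%
```

A big number in isolation; a rounding-error-sized one against the whole budget. That is precisely the kind of context the report pleads for.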

In the past few years there has been a welcome growth online of fact-checking websites that get to grips with some of the half-sense or nonsense uttered – sometimes deliberately – in public debate. Among the broadcasters, Channel 4 News got in first with “FactCheck” and deserves great credit for having done so. The BBC has Reality Check; there are also the non-aligned Full Fact and others. And the Institute for Fiscal Studies (IFS) sits as a mega-authority when it pronounces on individual economic statistics. (It was a particularly dispiriting episode when the IFS took a pounding during the EU campaign.)

The good newspapers and the broadcasters have correspondents who can – and do – understand the context in which statistical argument takes place. They know the difference between a big number and a not-so-big number, the difference between an aggregate spending figure and spending per head of population, the difference in importance between a one-month figure and a trend – and a trend that does not change much over time.

This is all good, and better than it used to be. But perhaps more of this rigour can be woven into what is still the dominant form of political accountability in broadcasting: the interview.

So let us try a thought experiment. Imagine (though we don’t really have to imagine) that the Health Secretary, Jeremy Hunt, comes into a studio to say, surprise, surprise, that more is being spent in real terms on the NHS than ever before. Imagine that he is told there will be no questions on anything else until he can answer, let’s say, two obvious supplementary questions: in the course of the past 60 years how often has his assertion not been true? (Answer, says the IFS: four times, one of which was 2011/12.) And what has been the growth in per capita NHS spend, in real terms, since 2009/10, compared to the previous 15 years or so? (Answer: 0.6 per cent, as opposed to 5.4 per cent.) Answering these would show that his boast is one that almost all of his predecessors could have made, and also that the Conservative-led coalition was less generous to the health service than the preceding Labour government. It would be absolutely fair for Jeremy Hunt to respond vigorously about the need to cut the deficit or even to make points about who was in government when the crash happened – but he could not be allowed to get away with statistical near-rubbish.

Similarly, the mantra on English education (“Our schools are getting more money than ever before”) is a waste of air. It’s not that the cuts are “vicious” – just that the assertion when put in context is gibberish. The economy is growing and the school population is growing, fast. So if we were not spending more in total, and in real terms, then the cuts would be vicious. And yet, per head, there will be less in real terms for pupils. Period.
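The arithmetic behind that trap is easy to make concrete. A minimal sketch, with hypothetical figures chosen purely to illustrate how a rising real-terms total can coexist with a falling per-pupil spend:

```python
# Hypothetical real-terms figures, illustrating how "more money than ever"
# in total can still mean less per pupil when pupil numbers grow faster.
budget_then, pupils_then = 40e9, 8.0e6   # baseline: £40bn, 8.0m pupils
budget_now, pupils_now = 42e9, 8.8e6     # later: total up 5%, pupils up 10%

per_pupil_then = budget_then / pupils_then   # £5,000 per pupil
per_pupil_now = budget_now / pupils_now      # about £4,773 per pupil

print(f"Total spending change: {budget_now / budget_then - 1:+.1%}")       # +5.0%
print(f"Per-pupil change: {per_pupil_now / per_pupil_then - 1:+.1%}")      # -4.5%
```

Both statements are true at once: a record total, and less for each child.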

The front-line interviewers I know best are very skilled journalists and they often do try to get a jab in when the numerical nonsense gets going – but they have to move on, whether to other urgent matters or to seek a news headline from the interview, and there is not enough jeopardy for the press officer or spin doctor who wrote the politician’s brief to desist from writing the same stuff next time around.

There may be other ways of levelling up matters. The interview could proceed as normal; but at the end of it up could pop, say, Tim Harford (of the brilliant statistics programme More or Less on Radio 4) to put in the necessary corrections. It would have to be done within a few minutes or else the impact would dissipate. From time to time, Harford or his equivalent does appear after a political interviewee has spouted statistically illiterate twaddle – but not often enough, and usually this happens long after the attempted mugging of intelligent debate. Too little, too late.

It would be obligatory to ensure that this type of treatment, particularly at election time, was meted out to all the parties – but outside the election it is the government of the day and its news departments that are going to have to face most of the music. Fair enough.

My suggestion is not put forward because I am advocating a particular party’s reading of the state of the nation (or nations). There is no monopoly on vice. We should not forget Labour’s “triple counting” of health service spending after 1997 even if Blair/Brown subsequently, in benign economic circumstances, did indeed put their foot on the health-spending accelerator.

Rather, when the election dust settles and the media seminar post-mortems crank up yet again – about the level of turnout, political ennui, the particular disengagement of the young, the coverage of the leaders, the role of opinion polls and other staples – we need to keep working on how to improve the quality of public debate. It is not all awful, and a stylised contempt for what is good is itself corrupting of democracy. But the numbers nonsense needs fixing. 

Mark Damazer is Master of St Peter’s College, Oxford, and was the controller of BBC Radio 4 from 2004 to 2010

This article first appeared in the 18 May 2017 issue of the New Statesman, Age of Lies
