
Your brain on pseudoscience: the rise of popular neurobollocks

The “neuroscience” shelves in bookshops are groaning. But are the works of authors such as Malcolm Gladwell and Jonah Lehrer just self-help books dressed up in a lab coat?

An intellectual pestilence is upon us. Shop shelves groan with books purporting to explain, through snazzy brain-imaging studies, not only how thoughts and emotions function, but how politics and religion work, and what the correct answers are to age-old philosophical controversies. The dazzling real achievements of brain research are routinely pressed into service for questions they were never designed to answer. This is the plague of neuroscientism – aka neurobabble, neurobollocks, or neurotrash – and it’s everywhere.

In my book-strewn lodgings, one literally trips over volumes promising that “the deepest mysteries of what makes us who we are are gradually being unravelled” by neuroscience and cognitive psychology. (Even practising scientists sometimes make such grandiose claims for a general audience, perhaps urged on by their editors: that quotation is from the psychologist Elaine Fox’s interesting book on “the new science of optimism”, Rainy Brain, Sunny Brain, published this summer.) In general, the “neural” explanation has become a gold standard of non-fiction exegesis, adding its own brand of computer-assisted lab-coat bling to a whole new industry of intellectual quackery that affects to elucidate even complex sociocultural phenomena. Chris Mooney’s The Republican Brain: the Science of Why They Deny Science – and Reality disavows “reductionism” yet encourages readers to treat people with whom they disagree more as pathological specimens of brain biology than as rational interlocutors.

The New Atheist polemicist Sam Harris, in The Moral Landscape, interprets brain and other research as showing that there are objective moral truths, enthusiastically inferring – almost as though this were the point all along – that science proves “conservative Islam” is bad.

Happily, a new branch of the neuroscience-explains-everything genre may be created at any time by the simple expedient of adding the prefix “neuro” to whatever you are talking about. Thus, “neuroeconomics” is the latest in a long line of rhetorical attempts to sell the dismal science as a hard one; “molecular gastronomy” has now been trumped in the scientised gluttony stakes by “neurogastronomy”; students of Republican and Democratic brains are doing “neuropolitics”; literature academics practise “neurocriticism”. There is “neurotheology”, “neuromagic” (according to Sleights of Mind, an amusing book about how conjurors exploit perceptual bias) and even “neuromarketing”. Hoping it’s not too late to jump on the bandwagon, I have decided to announce that I, too, am skilled in the newly minted fields of neuroprocrastination and neuroflâneurship.

Illumination is promised on a personal as well as a political level by the junk enlightenment of the popular brain industry. How can I become more creative? How can I make better decisions? How can I be happier? Or thinner? Never fear: brain research has the answers. It is self-help armoured in hard science. Life advice is the hook for nearly all such books. (Some cram the hard sell right into the title – such as John B Arden’s Rewire Your Brain: Think Your Way to a Better Life.) Quite consistently, their recommendations boil down to a kind of neo-Stoicism, drizzled with brain-juice. In a self-congratulatory egalitarian age, you can no longer tell people to improve themselves morally. So self-improvement is couched in instrumental, scientifically approved terms.

The idea that a neurological explanation could exhaust the meaning of experience was already being mocked as “medical materialism” by the psychologist William James a century ago. And today’s ubiquitous rhetorical confidence about how the brain works papers over a still-enormous scientific uncertainty. Paul Fletcher, professor of health neuroscience at the University of Cambridge, says that he gets “exasperated” by much popular coverage of neuroimaging research, which assumes that “activity in a brain region is the answer to some profound question about psychological processes. This is very hard to justify given how little we currently know about what different regions of the brain actually do.” Too often, he tells me in an email correspondence, a popular writer will “opt for some sort of neuro-flapdoodle in which a highly simplistic and questionable point is accompanied by a suitably grand-sounding neural term and thus acquires a weightiness that it really doesn’t deserve. In my view, this is no different to some mountebank selling quacksalve by talking about the physics of water molecules’ memories, or a beautician talking about action liposomes.”

Shades of grey

The human brain, it is said, is the most complex object in the known universe. That a part of it “lights up” on an fMRI scan does not mean the rest is inactive; nor is it obvious what any such lighting-up indicates; nor is it straightforward to infer general lessons about life from experiments conducted under highly artificial conditions. Nor do we have the faintest clue about the biggest mystery of all – how does a lump of wet grey matter produce the conscious experience you are having right now, reading this paragraph? How come the brain gives rise to the mind? No one knows.

So, instead, here is a recipe for writing a hit popular brain book. You start each chapter with a pat anecdote about an individual’s professional or entrepreneurial success, or narrow escape from peril. You then mine the neuroscientific research for an apparently relevant specific result and narrate the experiment, perhaps interviewing the scientist involved and describing his hair. You then climax in a fit of premature extrapolation, inferring from the scientific result a calming bromide about what it is to function optimally as a modern human being. Voilà, a laboratory-sanctioned Big Idea in digestible narrative form. This is what the psychologist Christopher Chabris has named the “story-study-lesson” model, perhaps first perfected by one Malcolm Gladwell. A series of these threesomes may be packaged into a book, and then resold again and again as a stand-up act on the wonderfully lucrative corporate lecture circuit.

Such is the rigid formula of Imagine: How Creativity Works, published in March this year by the American writer Jonah Lehrer. The book is a shatteringly glib mishmash of magazine yarn, bizarrely incompetent literary criticism, inspiring business stories about mops and dolls and zany overinterpretation of research findings in neuroscience and psychology. Lehrer responded to my hostile review of the book by claiming that I thought the science he was writing about was “useless”, but such garbage needs to be denounced precisely in defence of the achievements of science. (In a sense, as Paul Fletcher points out, such books are “anti-science, given that science is supposed to be our protection against believing whatever we find most convenient, comforting or compelling”.) More recently, Lehrer admitted fabricating quotes by Bob Dylan in Imagine, which was hastily withdrawn from sale, and he resigned from his post at the New Yorker. To invent things supposedly said by the most obsessively studied popular artist of our age is a surprising gambit. Perhaps Lehrer misunderstood his own advice about creativity.

Mastering one’s own brain is also the key to survival in a dog-eat-dog corporate world, as promised by the cognitive scientist Art Markman’s Smart Thinking: How to Think Big, Innovate and Outperform Your Rivals. Meanwhile, the field (or cult) of “neurolinguistic programming” (NLP) sells techniques not only of self-overcoming but of domination over others. (According to a recent NLP handbook, you can “create virtually any and all states” in other people by using “embedded commands”.) The employee using such arcane neurowisdom will get promoted over the heads of his colleagues; the executive will discover expert-sanctioned ways to render his underlings more docile and productive, harnessing “creativity” for profit.

Waterstones now even has a display section labelled “Smart Thinking”, stocked with pop brain tracts. The true function of such books, of course, is to free readers from the responsibility of thinking for themselves. This is made eerily explicit in the psychologist Jonathan Haidt’s The Righteous Mind, published last March, which claims to show that “moral knowledge” is best obtained through “intuition” (arising from unconscious brain processing) rather than by explicit reasoning. “Anyone who values truth should stop worshipping reason,” Haidt enthuses, in a perverse manifesto for autolobotomy. I made an Olympian effort to take his advice seriously, and found myself rejecting the reasoning of his entire book.

Modern neuro-self-help pictures the brain as a kind of recalcitrant Windows PC. You know there is obscure stuff going on under the hood, so you tinker delicately with what you can see to try to coax it into working the way you want. In an earlier age, thinkers pictured the brain as a marvellously subtle clockwork mechanism, that being the cutting-edge high technology of the day. Our own brain-as-computer metaphor has been around for decades: there is the “hardware”, made up of different physical parts (the brain), and the “software”, processing routines that use different neuronal “circuits”. Updating things a bit for the kids, the evolutionary psychologist Robert Kurzban, in Why Everyone (Else) Is a Hypocrite, explains that the brain is like an iPhone running a bunch of different apps.

Such metaphors are apt to a degree, as long as you remember to get them the right way round. (Gladwell, in Blink – whose motivational self-help slogan is that “we can control rapid cognition” – burblingly describes the fusiform gyrus as “an incredibly sophisticated piece of brain software”, though the fusiform gyrus is a physical area of the brain, and so analogous to “hardware” not “software”.) But these writers tend to reach for just one functional story about a brain subsystem – the story that fits with their Big Idea – while ignoring other roles the same system might play. This can lead to a comical inconsistency across different books, and even within the oeuvre of a single author.

Is dopamine “the molecule of intuition”, as Jonah Lehrer risibly suggested in The Decisive Moment (2009), or is it the basis of “the neural highway that’s responsible for generating the pleasurable emotions”, as he wrote in Imagine? (Meanwhile, Susan Cain’s Quiet: the Power of Introverts in a World That Can’t Stop Talking calls dopamine the “reward chemical” and postulates that extroverts are more responsive to it.) Other recurring stars of the pop literature are the hormone oxytocin (the “love chemical”) and mirror neurons, which allegedly explain empathy. Jonathan Haidt tells the weirdly unexplanatory micro-story that, in one experiment, “The subjects used their mirror neurons, empathised, and felt the other’s pain.” If I tell you to use your mirror neurons, do you know what to do? Alternatively, can you do as Lehrer advises and “listen to” your prefrontal cortex? Self-help can be a tricky business.


Distortion of what and how much we know is bound to occur, Paul Fletcher points out, if the literature is cherry-picked.

“Having outlined your theory,” he says, “you can then cite a finding from a neuroimaging study identifying, for example, activity in a brain region such as the insula . . . You then select from among the many theories of insula function, choosing the one that best fits with your overall hypothesis, but neglecting to mention that nobody really knows what the insula does or that there are many ideas about its possible function.”

But the great movie-monster of nearly all the pop brain literature is another region: the amygdala. It is routinely described as the “ancient” or “primitive” brain, scarily atavistic. There is strong evidence for the amygdala’s role in fear, but then fear is one of the most heavily studied emotions; popularisers downplay or ignore the amygdala’s associations with the cuddlier emotions and memory. The implicit picture is of our uneasy coexistence with a beast inside the head, which needs to be controlled if we are to be happy, or at least liberal. (In The Republican Brain, Mooney suggests that “conservatives and authoritarians” might be the nasty way they are because they have a “more active amygdala”.) René Descartes located the soul in the pineal gland; the moral of modern pop neuroscience is that original sin is physical – a bestial, demonic proto-brain lurking at the heart of darkness within our own skulls. It’s an angry ghost in the machine.

Indeed, despite their technical paraphernalia of neurotransmitters and anterior temporal gyruses, modern pop brain books are offering a spiritual topography. Such is the seductive appeal of fMRI brain scans, their splashes of red, yellow and green lighting up what looks like a black intracranial vacuum. In mass culture, the fMRI scan (usually merged from several individuals) has become a secular icon, the converse of a Hubble Space Telescope image. The latter shows us awe-inspiring vistas of distant nebulae, as though painstakingly airbrushed by a sci-fi book-jacket artist; the former peers the other way, into psychedelic inner space. And the pictures, like religious icons, inspire uncritical devotion: a 2008 study, Fletcher notes, showed that “people – even neuroscience undergrads – are more likely to believe a brain scan than a bar graph”.

In The Invisible Gorilla, Christopher Chabris and his collaborator Daniel Simons advise readers to be wary of such “brain porn”, but popular magazines, science websites and books are frenzied consumers and hypers of these scans. “This is your brain on music”, announces a caption to a set of fMRI images, and we are invited to conclude that we now understand more about the experience of listening to music. The “This is your brain on” meme, it seems, is indefinitely extensible: Google results offer “This is your brain on poker”, “This is your brain on metaphor”, “This is your brain on diet soda”, “This is your brain on God” and so on, ad nauseam. I hereby volunteer to submit to a functional magnetic-resonance imaging scan while reading a stack of pop neuroscience volumes, for an illuminating series of pictures entitled This Is Your Brain on Stupid Books About Your Brain.

None of the foregoing should be taken to imply that fMRI and other brain-investigation techniques are useless: there is beautiful and amazing science in how they work and what well-designed experiments can teach us. “One of my favourites,” Fletcher says, “is the observation that one can take measures of brain activity (either using fMRI or EEG) while someone is learning . . . a list of words, and that activity can actually predict whether particular words will be remembered when the person is tested later (even the next day). This to me demonstrates something important – that observing activity in the brain can tell us something about how somebody is processing stimuli in ways that the person themselves is unable to report. With measures like that, we can begin to see how valuable it is to measure brain activity – it is giving us information that would otherwise be hidden from us.”

In this light, one might humbly venture a preliminary diagnosis of the pop brain hacks’ chronic intellectual error. It is that they misleadingly assume we always know how to interpret such “hidden” information, and that it is always more reliably meaningful than what lies in plain view. The hucksters of neuroscientism are the conspiracy theorists of the human animal, the 9/11 Truthers of the life of the mind.

Steven Poole is the author of the forthcoming book “You Aren’t What You Eat”, which will be published by Union Books in October.

This article was updated on 18 September 2012.

This article first appeared in the 10 September 2012 issue of the New Statesman, Autumn politics special


David Cameron's fatal insouciance

Will future historians remember the former prime minister for anything more than his great Brexit bungle?

On 13 July 2016, after a premiership lasting six years and 63 days, David Cameron left Downing Street for the last time. On the tarmac outside the black door, with his wife and children at his side, he gave a characteristically cool and polished parting statement. Then he got in his car for the last journey to Buckingham Palace – the picture, as ever, of insouciant ease. As I was watching the television pictures of Cameron’s car gliding away, I remembered what he is supposed to have said some years earlier, when asked why he wanted to be prime minister. True or not, his answer perfectly captured the public image of the man: “Because I think I’d be rather good at it.”

A few moments later, a friend sent me a text message. It was just six words long: “He’s down there with Chamberlain now.”

At first I thought that was a bit harsh. People will probably always disagree about Cameron’s economic record, just as they do about Margaret Thatcher’s. But at the very least it was nowhere near as bad as some of his critics had predicted, and by some standards – jobs created, for instance – it was much better than many observers had expected. His government’s welfare and education policies have their critics, but it seems highly unlikely that people will still be talking about them in a few decades’ time. Similarly, although Britain’s intervention in Libya is unlikely to win high marks from historians, it never approached the disaster of Iraq in the public imagination.

Cameron will probably score highly for his introduction of gay marriage, and although there are many people who dislike him, polls suggested that most voters regarded him as a competent, cheerful and plausible occupant of the highest office in the land. To put it another way, from the day he entered 10 Downing Street until the moment he left, he always looked prime ministerial. It is true that he left office as a loser, humiliated by the EU referendum, and yet, on the day he departed, the polls had him comfortably ahead of his Labour opposite number. He was, in short, popular.

On the other hand, a lot of people liked Neville Chamberlain, too. Like Chamberlain, Cameron seems destined to be remembered for only one thing. When students answer exam questions about Chamberlain, it’s a safe bet that they aren’t writing about the Holidays with Pay Act 1938. And when students write about Cameron in the year 2066, they won’t be answering questions about intervention in Libya, or gay marriage. They will be writing about Brexit and the lost referendum.

It is, of course, conceivable, though surely very unlikely, that Brexit will be plain sailing. But it is very possible that it will be bitter, protracted and enormously expensive. Indeed, it is perfectly conceivable that by the tenth anniversary of the referendum, the United Kingdom could be reduced to an English and Welsh rump, struggling to come to terms with a punitive European trade deal and casting resentful glances at a newly independent Scotland. Of course the Brexiteers – Nigel Farage, Boris Johnson, Michael Gove, Daniel Hannan et al – would get most of the blame in the short run. But in the long run, would any of them really be remembered? Much more likely is that historians’ fingers would point at one man: Cameron, the leader of the Conservative and Unionist Party, the prime minister who gambled with his future and lost the Union. The book by “Cato” that destroyed Chamberlain’s reputation in July 1940 was entitled Guilty Men. How long would it be, I wonder, before somebody brought out a book about Cameron, entitled Guilty Man?

Naturally, all this may prove far too pessimistic. My own suspicion is that Brexit will turn out to be a typically European – or, if you prefer, a typically British – fudge. And if the past few weeks’ polls are anything to go by, Scottish independence remains far from certain. So, in a less apocalyptic scenario, how would posterity remember David Cameron? As a historic failure and “appalling bungler”, as one Guardian writer called him? Or as a “great prime minister”, as Theresa May claimed on the steps of No 10?

Neither. The answer, I think, is that it would not remember him at all.


The late Roy Jenkins, who – as Herbert Asquith’s biographer, Harold Wilson’s chancellor and Jim Callaghan’s rival – was passionately interested in such things, used to write of a “market” in prime ministerial futures. “Buy Attlee!” he might say. “Sell Macmillan!” But much of this strikes me as nonsense. For one thing, political reputations fluctuate much less than we think. Many people’s views of, say, Wilson, Thatcher and Blair have remained unchanged since the day they left office. Over time, reputations do not change so much as fade. Academics remember prime ministers; so do political anoraks and some politicians; but most people soon forget they ever existed. There are 53 past prime ministers of the United Kingdom, but who now remembers most of them? Outside the university common room, who cares about the Marquess of Rockingham, the Earl of Derby, Lord John Russell, or Arthur Balfour? For that matter, who cares about Asquith or Wilson? If you stopped people in the streets of Sunderland, how many of them would have heard of Stanley Baldwin or Harold Macmillan? And even if they had, how much would they really know about them?

In any case, what does it mean to be a success or a failure as prime minister? How on Earth can you measure Cameron’s achievements, or lack of them? We all have our favourites and our prejudices, but how do you turn that into something more dispassionate? To give a striking example, Margaret Thatcher never won more than 43.9 per cent of the vote, was roundly hated by much of the rest of the country and was burned in effigy when she died, long after her time in office had passed into history. Having come to power promising to revive the economy and get Britain working again, she contrived to send unemployment well over three million, presided over the collapse of much of British manufacturing and left office with the economy poised to plunge into yet another recession. So, in that sense, she looks a failure.

Yet at the same time she won three consecutive general elections, regained the Falklands from Argentina, pushed through bold reforms to Britain’s institutions and fundamentally recast the terms of political debate for a generation to come. In that sense, clearly she was a success. How do you reconcile those two positions? How can you possibly avoid yielding to personal prejudice? How, in fact, can you reach any vaguely objective verdict at all?

It is striking that, although we readily discuss politicians in terms of success and failure, we rarely think about what that means. In some walks of life, the standard for success seems obvious. Take the other “impossible job” that the tabloids love to compare with serving as prime minister: managing the England football team. You can measure a football manager’s success by trophies won, qualifications gained, even points accrued per game, just as you can judge a chief executive’s performance in terms of sales, profits and share values.

There is no equivalent for prime ministerial leadership. Election victories? That would make Clement Attlee a failure: he fought five elections and won only two. It would make Winston Churchill a failure, too: he fought three elections and won only one. Economic growth? Often that has very little to do with the man or woman at the top. Opinion polls? There’s more to success than popularity, surely. Wars? Really?

The ambiguity of the question has never stopped people trying. There is even a Wikipedia page devoted to “Historical rankings of Prime Ministers of the United Kingdom”, which incorporates two surveys of academics carried out by the University of Leeds, a BBC Radio 4 poll of Westminster commentators, a feature by BBC History Magazine and an online poll organised by Newsnight. By and large, there is a clear pattern. Among 20th-century leaders, there are four clear “successes” – Lloyd George, Churchill, Attlee and Thatcher – with the likes of Macmillan, Wilson and Heath scrapping for mid-table places. At the bottom, too, the same names come up again and again: Balfour, Chamberlain, Eden, Douglas-Home and Major. But some of these polls are quite old, dating back to the Blair years. My guess is that if they were conducted today, Major might rise a little, especially after the success of Team GB at the Olympics, and Gordon Brown might find himself becalmed somewhere towards the bottom.


So what makes the failures, well, failures? In two cases, the answer is simply electoral defeat. Both Arthur Balfour and John Major were doomed to failure from the moment they took office, precisely because they had been picked from within the governing party to replace strong, assertive and electorally successful leaders in Lord Salisbury and Margaret Thatcher, respectively. It’s true that Major unexpectedly won the 1992 election, but in both cases there was an atmosphere of fin de régime from the very beginning. Douglas-Home probably fits into this category, too, coming as he did at the fag end of 13 years of Conservative rule. Contrary to political mythology, he was in fact a perfectly competent prime minister, and came much closer to winning the 1964 election than many people had expected. But he wasn’t around for long and never really captured the public mood. It seems harsh merely to dismiss him as a failure, but politics is a harsh business.

That leaves two: Chamberlain and Eden. Undisputed failures, who presided over the greatest foreign policy calamities in our modern history. Nothing to say, then? Not so. Take Chamberlain first. More than any other individual in our modern history, he has become a byword for weakness, naivety and self-deluding folly.

Yet much of this picture is wrong. Chamberlain was not a weak or indecisive man. If anything, he was too strong: too stubborn, too self-confident. Today we remember him as a faintly ridiculous, backward-looking man, with his umbrella and wing collar. But many of his contemporaries saw him as a supremely modern administrator, a reforming minister of health and an authoritative chancellor who towered above his Conservative contemporaries. It was this impression of cool capability that secured Chamberlain the crown when Baldwin stepped down in 1937. Unfortunately, it was precisely his titanic self-belief, his unbreakable faith in his own competence, that also led him to overestimate his influence over Adolf Hitler. In other words, the very quality that people most admired – his stubborn confidence in his own ability – was precisely what doomed him.

In Chamberlain’s case, there is no doubt that he had lost much of his popular prestige by May 1940, when he stepped down as prime minister. Even though most of his own Conservative MPs still backed him – as most of Cameron’s MPs still backed him after the vote in favour of Brexit – the evidence of Mass Observation and other surveys suggests that he had lost support in the country at large, and his reputation soon dwindled to its present calamitous level.

The case of the other notable failure, Anthony Eden, is different. When he left office after the Suez crisis in January 1957, it was not because the public had deserted him, but because his health had collapsed. Surprising as it may seem, Eden was more popular after Suez than he had been before it. In other words, if the British people had had their way, Eden would probably have continued as prime minister. They did not see him as a failure at all.

Like Chamberlain, Eden is now generally regarded as a dud. Again, this may be a bit unfair. As his biographers have pointed out, he was a sick and exhausted man when he took office – the result of two disastrously botched operations on his gall bladder – and relied on a cocktail of painkillers and stimulants. Yet, to the voters who handed him a handsome general election victory in 1955, Eden seemed to have all the qualities to become an enormously successful prime minister: good looks, brains, charm and experience, like a slicker, cleverer and more seasoned version of Cameron. In particular, he was thought to have proved his courage in the late 1930s, when he had resigned as foreign secretary in protest at the appeasement of Benito Mussolini before becoming one of Churchill’s chief lieutenants.

Yet it was precisely Eden’s great asset – his reputation as a man who had opposed appeasement and stood up to the dictators – that became his weakness. In effect, he became trapped by his own legend. When the Egyptian dictator Gamal Abdel Nasser nationalised the Suez Canal in July 1956, Eden seemed unable to view it as anything other than a replay of the fascist land-grabs of the 1930s. Nasser was Mussolini; the canal was Abyssinia; failure to resist would be appeasement all over again. This was nonsense, really: Nasser was nothing like Mussolini. But Eden could not escape the shadow of his own political youth.

This phenomenon – a prime minister’s greatest strength gradually turning into his or her greatest weakness – is remarkably common. Harold Wilson’s nimble cleverness, Jim Callaghan’s cheerful unflappability, Margaret Thatcher’s restless urgency, John Major’s Pooterish normality, Tony Blair’s smooth charm, Gordon Brown’s rugged seriousness: all these things began as refreshing virtues but became big handicaps. So, in that sense, what happened to Chamberlain and Eden was merely an exaggerated version of what happens to every prime minister. Indeed, perhaps it is only pushing it a bit to suggest, echoing Enoch Powell, that all prime ministers, their human flaws inevitably amplified by the stresses of office, eventually end up as failures. In fact, it may not be too strong to suggest that in an age of 24-hour media scrutiny, surging populism and a general obsession with accountability, the very nature of the job invites failure.


In Cameron’s case, it would be easy to construct a narrative based on similar lines. Remember, after all, how he won the Tory leadership in the first place. He went into the 2005 party conference behind David Davis, the front-runner, but overhauled him after a smooth, fluent and funny speech, delivered without notes. That image of blithe nonchalance served him well at first, making for a stark contrast with the saturnine intensity and stumbling stiffness of his immediate predecessors, Michael Howard and Iain Duncan Smith. Yet in the end it was Cameron’s self-confidence that really did for him.

Future historians will probably be arguing for years to come whether he really needed to promise an In/Out referendum on the UK’s membership of the EU, as his defenders claim, to protect his flank against Ukip. What is not in doubt is that Cameron believed he could win it. It became a cliché to call him an “essay crisis” prime minister – a gibe that must have seemed meaningless to millions of people who never experienced the weekly rhythms of the Oxford tutorial system. And yet he never really managed to banish the impression of insouciance. The image of chillaxing Dave, the PM so cockily laidback that he left everything until the last minute, may be a caricature, but my guess is that it will stick.

As it happens, I think Cameron deserves more credit than his critics are prepared to give him. I think it would be easy to present him as a latter-day Baldwin – which I mean largely as a compliment. Like Baldwin, he was a rich provincial Tory who posed as an ordinary family man. Like Baldwin, he offered economic austerity during a period of extraordinary international financial turmoil. Like Baldwin, he governed in coalition while relentlessly squeezing the Liberal vote. Like Baldwin, he presented himself as the incarnation of solid, patriotic common sense; like Baldwin, he was cleverer than his critics thought; like Baldwin, he was often guilty of mind-boggling complacency. The difference is that when Baldwin gambled and lost – as when he called a rash general election in 1923 – he managed to save his career from the ruins. When Cameron gambled and lost, it was all over.

Although I voted Remain, I do not share many commentators’ view of Brexit as an apocalyptic disaster. In any case, given that a narrow majority of the electorate got the result it wanted, at least 17 million people presumably view Cameron’s gamble as a great success – for Britain, if not for him. Unfortunately for Cameron, however, most British academics are left-leaning Remainers, and it is they who will write the history books. What ought also to worry Cameron’s defenders – or his shareholders, to use Roy Jenkins’s metaphor – is that both Chamberlain and Eden ended up being defined by their handling of Britain’s foreign policy. There is a curious paradox here, because foreign affairs almost never matters at the ballot box. In 1959, barely three years after Suez, the Conservatives cruised to an easy re-election victory; in 2005, just two years after invading Iraq, when the extent of the disaster was already apparent, Blair won a similarly comfortable third term in office. Perhaps foreign affairs matters more to historians than it does to most voters. In any case, the lesson seems to be that, if you want to secure your historical reputation, you can get away with mishandling the economy and lengthening the dole queues, but you simply cannot afford to damage Britain’s international standing.

So, if Brexit does turn into a total disaster, Cameron can expect little quarter. Indeed, while historians have some sympathy for Chamberlain, who was, after all, motivated by a laudable desire to avoid war, and even for Eden, who was a sick and troubled man, they are unlikely to feel similar sympathy for an overconfident prime minister at the height of his powers, who seems to have brought his fate upon himself.

How much of this, I wonder, went through David Cameron’s mind in the small hours of that fateful morning of 24 June, as the results came through and his place in history began to take shape before his horrified eyes? He reportedly likes to read popular history for pleasure; he must occasionally have wondered how he would be remembered. But perhaps it meant less to him than we think. Most people give little thought to how they will be remembered after their death, except by their closest friends and family members. There is something insecure, something desperately needy, about people who dwell on their place in history.

Whatever you think about Cameron, he never struck me as somebody suffering from excessive insecurity. Indeed, his normality was one of the most likeable things about him.

He must have been deeply hurt by his failure. But my guess is that, even as his car rolled away from 10 Downing Street for the last time, his mind was already moving on to other things. Most prime ministers leave office bitter, obsessive and brooding. But, like Stanley Baldwin, Cameron strolled away from the job as calmly as he had strolled into it. It was that fatal insouciance that brought him down. 

Dominic Sandbrook is a historian, broadcaster and columnist for the Daily Mail. His book The Great British Dream Factory will be published in paperback by Penguin on 1 September

This article first appeared in the 25 August 2016 issue of the New Statesman, Cameron: the legacy of a loser