
Your brain on pseudoscience: the rise of popular neurobollocks

The “neuroscience” shelves in bookshops are groaning. But are the works of authors such as Malcolm Gladwell and Jonah Lehrer just self-help books dressed up in a lab coat?

An intellectual pestilence is upon us. Shop shelves groan with books purporting to explain, through snazzy brain-imaging studies, not only how thoughts and emotions function, but how politics and religion work, and what the correct answers are to age-old philosophical controversies. The dazzling real achievements of brain research are routinely pressed into service for questions they were never designed to answer. This is the plague of neuroscientism – aka neurobabble, neurobollocks, or neurotrash – and it’s everywhere.

In my book-strewn lodgings, one literally trips over volumes promising that “the deepest mysteries of what makes us who we are are gradually being unravelled” by neuroscience and cognitive psychology. (Even practising scientists sometimes make such grandiose claims for a general audience, perhaps urged on by their editors: that quotation is from the psychologist Elaine Fox’s interesting book on “the new science of optimism”, Rainy Brain, Sunny Brain, published this summer.) In general, the “neural” explanation has become a gold standard of non-fiction exegesis, adding its own brand of computer-assisted lab-coat bling to a whole new industry of intellectual quackery that affects to elucidate even complex sociocultural phenomena. Chris Mooney’s The Republican Brain: the Science of Why They Deny Science – and Reality disavows “reductionism” yet encourages readers to treat people with whom they disagree more as pathological specimens of brain biology than as rational interlocutors.

The New Atheist polemicist Sam Harris, in The Moral Landscape, interprets brain and other research as showing that there are objective moral truths, enthusiastically inferring – almost as though this were the point all along – that science proves “conservative Islam” is bad.

Happily, a new branch of the neuroscience-explains-everything genre may be created at any time by the simple expedient of adding the prefix “neuro” to whatever you are talking about. Thus, “neuroeconomics” is the latest in a long line of rhetorical attempts to sell the dismal science as a hard one; “molecular gastronomy” has now been trumped in the scientised gluttony stakes by “neurogastronomy”; students of Republican and Democratic brains are doing “neuropolitics”; literature academics practise “neurocriticism”. There is “neurotheology”, “neuromagic” (according to Sleights of Mind, an amusing book about how conjurors exploit perceptual bias) and even “neuromarketing”. Hoping it’s not too late to jump on the bandwagon, I have decided to announce that I, too, am skilled in the newly minted fields of neuroprocrastination and neuroflâneurship.

Illumination is promised on a personal as well as a political level by the junk enlightenment of the popular brain industry. How can I become more creative? How can I make better decisions? How can I be happier? Or thinner? Never fear: brain research has the answers. It is self-help armoured in hard science. Life advice is the hook for nearly all such books. (Some cram the hard sell right into the title – such as John B Arden’s Rewire Your Brain: Think Your Way to a Better Life.) Quite consistently, their recommendations boil down to a kind of neo-Stoicism, drizzled with brain-juice. In a self-congratulatory egalitarian age, you can no longer tell people to improve themselves morally. So self-improvement is couched in instrumental, scientifically approved terms.

The idea that a neurological explanation could exhaust the meaning of experience was already being mocked as “medical materialism” by the psychologist William James a century ago. And today’s ubiquitous rhetorical confidence about how the brain works papers over a still-enormous scientific uncertainty. Paul Fletcher, professor of health neuroscience at the University of Cambridge, says that he gets “exasperated” by much popular coverage of neuroimaging research, which assumes that “activity in a brain region is the answer to some profound question about psychological processes. This is very hard to justify given how little we currently know about what different regions of the brain actually do.” Too often, he tells me in an email correspondence, a popular writer will “opt for some sort of neuro-flapdoodle in which a highly simplistic and questionable point is accompanied by a suitably grand-sounding neural term and thus acquires a weightiness that it really doesn’t deserve. In my view, this is no different to some mountebank selling quacksalve by talking about the physics of water molecules’ memories, or a beautician talking about action liposomes.”

Shades of grey

The human brain, it is said, is the most complex object in the known universe. That a part of it “lights up” on an fMRI scan does not mean the rest is inactive; nor is it obvious what any such lighting-up indicates; nor is it straightforward to infer general lessons about life from experiments conducted under highly artificial conditions. Nor do we have the faintest clue about the biggest mystery of all – how does a lump of wet grey matter produce the conscious experience you are having right now, reading this paragraph? How come the brain gives rise to the mind? No one knows.

So, instead, here is a recipe for writing a hit popular brain book. You start each chapter with a pat anecdote about an individual’s professional or entrepreneurial success, or narrow escape from peril. You then mine the neuroscientific research for an apparently relevant specific result and narrate the experiment, perhaps interviewing the scientist involved and describing his hair. You then climax in a fit of premature extrapolation, inferring from the scientific result a calming bromide about what it is to function optimally as a modern human being. Voilà, a laboratory-sanctioned Big Idea in digestible narrative form. This is what the psychologist Christopher Chabris has named the “story-study-lesson” model, perhaps first perfected by one Malcolm Gladwell. A series of these threesomes may be packaged into a book, and then resold again and again as a stand-up act on the wonderfully lucrative corporate lecture circuit.

Such is the rigid formula of Imagine: How Creativity Works, published in March this year by the American writer Jonah Lehrer. The book is a shatteringly glib mishmash of magazine yarn, bizarrely incompetent literary criticism, inspiring business stories about mops and dolls and zany overinterpretation of research findings in neuroscience and psychology. Lehrer responded to my hostile review of the book by claiming that I thought the science he was writing about was “useless”, but such garbage needs to be denounced precisely in defence of the achievements of science. (In a sense, as Paul Fletcher points out, such books are “anti-science, given that science is supposed to be our protection against believing whatever we find most convenient, comforting or compelling”.) More recently, Lehrer admitted fabricating quotes by Bob Dylan in Imagine, which was hastily withdrawn from sale, and he resigned from his post at the New Yorker. To invent things supposedly said by the most obsessively studied popular artist of our age is a surprising gambit. Perhaps Lehrer misunderstood his own advice about creativity.

Mastering one’s own brain is also the key to survival in a dog-eat-dog corporate world, as promised by the cognitive scientist Art Markman’s Smart Thinking: How to Think Big, Innovate and Outperform Your Rivals. Meanwhile, the field (or cult) of “neurolinguistic programming” (NLP) sells techniques not only of self-overcoming but of domination over others. (According to a recent NLP handbook, you can “create virtually any and all states” in other people by using “embedded commands”.) The employee using such arcane neurowisdom will get promoted over the heads of his colleagues; the executive will discover expert-sanctioned ways to render his underlings more docile and productive, harnessing “creativity” for profit.

Waterstones now even has a display section labelled “Smart Thinking”, stocked with pop brain tracts. The true function of such books, of course, is to free readers from the responsibility of thinking for themselves. This is made eerily explicit in the psychologist Jonathan Haidt’s The Righteous Mind, published last March, which claims to show that “moral knowledge” is best obtained through “intuition” (arising from unconscious brain processing) rather than by explicit reasoning. “Anyone who values truth should stop worshipping reason,” Haidt enthuses, in a perverse manifesto for autolobotomy. I made an Olympian effort to take his advice seriously, and found myself rejecting the reasoning of his entire book.

Modern neuro-self-help pictures the brain as a kind of recalcitrant Windows PC. You know there is obscure stuff going on under the hood, so you tinker delicately with what you can see to try to coax it into working the way you want. In an earlier age, thinkers pictured the brain as a marvellously subtle clockwork mechanism, that being the cutting-edge high technology of the day. Our own brain-as-computer metaphor has been around for decades: there is the “hardware”, made up of different physical parts (the brain), and the “software”, processing routines that use different neuronal “circuits”. Updating things a bit for the kids, the evolutionary psychologist Robert Kurzban, in Why Everyone (Else) Is a Hypocrite, explains that the brain is like an iPhone running a bunch of different apps.

Such metaphors are apt to a degree, as long as you remember to get them the right way round. (Gladwell, in Blink – whose motivational self-help slogan is that “we can control rapid cognition” – burblingly describes the fusiform gyrus as “an incredibly sophisticated piece of brain software”, though the fusiform gyrus is a physical area of the brain, and so analogous to “hardware” not “software”.) But these writers tend to reach for just one functional story about a brain subsystem – the story that fits with their Big Idea – while ignoring other roles the same system might play. This can lead to a comical inconsistency across different books, and even within the oeuvre of a single author.

Is dopamine “the molecule of intuition”, as Jonah Lehrer risibly suggested in The Decisive Moment (2009), or is it the basis of “the neural highway that’s responsible for generating the pleasurable emotions”, as he wrote in Imagine? (Meanwhile, Susan Cain’s Quiet: the Power of Introverts in a World That Can’t Stop Talking calls dopamine the “reward chemical” and postulates that extroverts are more responsive to it.) Other recurring stars of the pop literature are the hormone oxytocin (the “love chemical”) and mirror neurons, which allegedly explain empathy. Jonathan Haidt tells the weirdly unexplanatory micro-story that, in one experiment, “The subjects used their mirror neurons, empathised, and felt the other’s pain.” If I tell you to use your mirror neurons, do you know what to do? Alternatively, can you do as Lehrer advises and “listen to” your prefrontal cortex? Self-help can be a tricky business.

Cherry-picking

Distortion of what and how much we know is bound to occur, Paul Fletcher points out, if the literature is cherry-picked.

“Having outlined your theory,” he says, “you can then cite a finding from a neuroimaging study identifying, for example, activity in a brain region such as the insula . . . You then select from among the many theories of insula function, choosing the one that best fits with your overall hypothesis, but neglecting to mention that nobody really knows what the insula does or that there are many ideas about its possible function.”

But the great movie-monster of nearly all the pop brain literature is another region: the amygdala. It is routinely described as the “ancient” or “primitive” brain, scarily atavistic. There is strong evidence for the amygdala’s role in fear, but then fear is one of the most heavily studied emotions; popularisers downplay or ignore the amygdala’s associations with the cuddlier emotions and memory. The implicit picture is of our uneasy coexistence with a beast inside the head, which needs to be controlled if we are to be happy, or at least liberal. (In The Republican Brain, Mooney suggests that “conservatives and authoritarians” might be the nasty way they are because they have a “more active amygdala”.) René Descartes located the soul in the pineal gland; the moral of modern pop neuroscience is that original sin is physical – a bestial, demonic proto-brain lurking at the heart of darkness within our own skulls. It’s an angry ghost in the machine.

Indeed, despite their technical paraphernalia of neurotransmitters and anterior temporal gyruses, modern pop brain books are offering a spiritual topography. Such is the seductive appeal of fMRI brain scans, their splashes of red, yellow and green lighting up what looks like a black intracranial vacuum. In mass culture, the fMRI scan (usually merged from several individuals) has become a secular icon, the converse of a Hubble Space Telescope image. The latter shows us awe-inspiring vistas of distant nebulae, as though painstakingly airbrushed by a sci-fi book-jacket artist; the former peers the other way, into psychedelic inner space. And the pictures, like religious icons, inspire uncritical devotion: a 2008 study, Fletcher notes, showed that “people – even neuroscience undergrads – are more likely to believe a brain scan than a bar graph”.

In The Invisible Gorilla, Christopher Chabris and his collaborator Daniel Simons advise readers to be wary of such “brain porn”, but popular magazines, science websites and books are frenzied consumers and hypers of these scans. “This is your brain on music”, announces a caption to a set of fMRI images, and we are invited to conclude that we now understand more about the experience of listening to music. The “This is your brain on” meme, it seems, is indefinitely extensible: Google results offer “This is your brain on poker”, “This is your brain on metaphor”, “This is your brain on diet soda”, “This is your brain on God” and so on, ad nauseam. I hereby volunteer to submit to a functional magnetic-resonance imaging scan while reading a stack of pop neuroscience volumes, for an illuminating series of pictures entitled This Is Your Brain on Stupid Books About Your Brain.

None of the foregoing should be taken to imply that fMRI and other brain-investigation techniques are useless: there is beautiful and amazing science in how they work and what well-designed experiments can teach us. “One of my favourites,” Fletcher says, “is the observation that one can take measures of brain activity (either using fMRI or EEG) while someone is learning . . . a list of words, and that activity can actually predict whether particular words will be remembered when the person is tested later (even the next day). This to me demonstrates something important – that observing activity in the brain can tell us something about how somebody is processing stimuli in ways that the person themselves is unable to report. With measures like that, we can begin to see how valuable it is to measure brain activity – it is giving us information that would otherwise be hidden from us.”

In this light, one might humbly venture a preliminary diagnosis of the pop brain hacks’ chronic intellectual error. It is that they misleadingly assume we always know how to interpret such “hidden” information, and that it is always more reliably meaningful than what lies in plain view. The hucksters of neuroscientism are the conspiracy theorists of the human animal, the 9/11 Truthers of the life of the mind.

Steven Poole is the author of the forthcoming book “You Aren’t What You Eat”, which will be published by Union Books in October.

This article was updated on 18 September 2012.

This article first appeared in the 10 September 2012 issue of the New Statesman, Autumn politics special


The English Revolt

Brexit, Euroscepticism and the future of the United Kingdom.

English voters have led – some would say forced – the United Kingdom towards exit from the European Union. Was this an English revolt, the result of an upsurge over decades of a more assertive, perhaps resentful, sense of English identity? At one level, clearly so. Surveys indicate that individuals who most often describe themselves as “English”, and regions where this is common, were more inclined to vote Leave on 23 June. Some of these are poorer regions where marginalised people think that their voices are more likely to be heard in a national democracy than in an international trading bloc, and for whom patriotism is a source of self-respect. But it would only make sense to regard Leave as essentially an English reaction if discontent with the EU were confined to England, or specifically linked with feelings of Englishness.

In fact, negative opinions about the EU, and especially about its economic policy, are now more widespread in other countries than they are in England. Polls by the Pew Research Centre last month showed that disapproval of the EU was as high in Germany and the Netherlands as in Britain, and higher in France, Greece and Spain. Though aggravated by the 2007-2008 crash and enforced policies of austerity, a decline in support was clear earlier. France’s referendum of May 2005 gave a 55 per cent No to the proposed EU constitution after thorough debate, and a now familiar pattern emerged: enthusiastic Europeanism was confined to the wealthiest suburbs and quarters of Paris, and the only professional groups that strongly voted Yes were big business, the liberal professions and academics.

Going far beyond the atavistic and incoherent English revolt that some think they discern, our referendum result is partly a consequence of transnational political phenomena across the democratic world: the disaffection of citizens from conventional politics, shown by falling turnouts for elections, shrinking party membership and the rise of new, sometimes extreme political movements; as well as the simultaneous detachment of a professional political class from civil society, and its consequent retreat into a closed world of institutions.

The EU embodies these phenomena in uniquely acute form. In several cases its central bodies have opposed – or, if one prefers, have been forced to deny – democratically expressed wishes. In Greece and Italy, the EU has enforced changes of government and policy, and in Denmark, Ireland and the Netherlands it has pressed countries to ignore or reverse popular referendums. Its own representative body, the European Parliament, has gained neither power nor legitimacy. Crucial decisions are taken in secret, making the EU a hiding place for beleaguered politicians as well as a source of lavish financial reward for insiders. In the words of the historian John Gillingham, Europe is now being governed by neither its peoples nor its ideals, but by a bank board. This is not the “superstate” of Eurosceptic mythology. Though it drains power and legitimacy away from national governments, it is incapable of exercising power effectively itself, whether to cope with short-term emergencies such as an inflow of refugees, or to solve chronic failings such as the creation of mass unemployment in southern Europe. The result is paralysis, the inability either to extricate itself from failing institutions or to make them work.

If popular discontent with the EU continues to increase (and it is hard to see how it could not) sooner or later there will be some unmanageable political or social crisis. The response of too many supporters of the EU is to screw the lid down tighter, including now by promising to make life difficult for the United Kingdom, pour décourager les autres. This is the organisation – unpopular, unaccountable, secretive, often corrupt, and economically failing – from which our decision to depart apparently causes people to weep in the streets.

***

Why this decision? Why in Britain? The simplest and perhaps the best answer is that we have had a referendum. If France, Greece, Italy and some other countries had been given the same choice, they might well have made the same decision. But of course they have not been and will not be given such a choice, barring severe political crisis. This is most obviously because countries that have adopted the euro – even those such as Greece, for which the IMF has predicted high unemployment at least until the 2040s – have no clear way out.

I make this obvious point to emphasise that the immediate explanation of what has happened lies not only and not mainly in different feelings about the EU in Britain, but in different political opportunities and levels of fear. The contrasting votes in Scotland and Northern Ireland have particular explanations. Scottish nationalists – like their counterparts in Catalonia – see the EU as an indispensable support for independence. Northern Ireland sees the matter primarily as one affecting its own, still tense domestic politics and its relations with the Republic. In a European perspective, Scotland and Northern Ireland are the outliers, not England and Wales. Indeed, Scotland’s vote makes it stand out as one of the most pro-EU countries in Europe. If ever there is another referendum to see whether Scots prefer the EU to the UK, it will show whether this level of support for the EU is solid.

If England is exceptional, it is not in its disaffection from the EU, nor in the political divisions the referendum vote has exposed (if France, for instance, had such a vote, one could expect blood in the streets). Rather, its exceptional characteristic is its long-standing and settled scepticism about the European project in principle, greater than in any other EU country. Every member has a specific history that shapes its attitude to the theoretical idea of European integration. As John Gillingham, one of the most perceptive historians of the EU, describes its beginnings: “to the French [supranationalism was] a flag of convenience, to the Italians it was preferable (by definition) to government by Rome, to the Germans a welcome escape route, and to the Benelux nations a better choice than being dominated by powerful neighbours”.

Subsequently, for the eastern European states, it was a decisive step away from communist dictatorship, and for southern Europe a line drawn under a traumatic history of civil conflict. There is also a widespread belief, powerful though fanciful, that the EU prevents war between the European states. All these are important reasons why there remains considerable support for unification as an aspiration. But all these reasons are weaker, and some of them non-existent, in Britain, and especially in England. The simple reason for this is that Britain’s experience of the 20th century was far less traumatic. Moreover, during that time loyalty to the nation was not tarnished with fascism, but was rather the buttress of freedom and democracy. Conversely, the vision of a European “superstate” is seen less as a guarantee of peace and freedom, and rather as the latest in a five-century succession of would-be continental hegemons.

Given all this, an obvious question is why the United Kingdom ever joined in the European project in the first place. The answer helps to explain the country’s subsequent lack of enthusiasm. Its first response to the creation of the European Economic Community in 1957 was not to join, but to agree to establish a separate European Free Trade Association (Efta) in 1959 with Austria, Denmark, Norway, Portugal, Sweden and Switzerland; over the next three decades the seven founder members were joined by Finland, Iceland and Liechtenstein. This worked efficiently, cheaply and amicably, and, in time, Efta and the EEC would doubtless have created trading arrangements and systems of co-operation. But then the historic mistake was made. Efta was considered too small to provide the diplomatic clout craved by Whitehall at a time of severe post-imperial jitters. A cabinet committee warned in 1960 that “if we try to remain aloof from [the EEC] – bearing in mind that this will be happening simultaneously with the contraction of our overseas possessions – we shall run the risk of losing political influence and of ceasing to be able to exercise any real claim to be a world Power”.

Besides, Washington disliked Efta as a barrier to its aim of a federal Europe, and the Americans put heavy pressure on London to apply to accede to the Treaty of Rome, which it duly did in August 1961. “It is only full membership, with the possibility of controlling and dominating Europe,” wrote an optimistic British cabinet official, “that is really attractive.”

As the former US secretary of state Dean Acheson (one of the early backers of European integration) put it, in a now celebrated comment in December 1962: “Great Britain has lost an empire, and has not yet found a role. The attempt to play a separate power role . . . apart from Europe . . . based on a ‘special relationship’ with the United States [or] on being the head of a ‘Commonwealth’ . . . – this role is about played out.”

Acheson’s words long haunted British policymakers; perhaps they still do. And yet Britain remains one of the half-dozen strongest and most assertive states anywhere in the world, just as it has been for the past three centuries.

To fear of diplomatic marginalisation was added fear of economic decline. A government report in 1953 warned of “relegation of the UK to the second division”. Over the next 30 years there was a chorus of dismay about “the sick man of Europe”. Belief that EEC membership at any price was the only cure for Britain’s perceived economic ills became the orthodoxy in official circles: Britain was “the sinking Titanic”, and “Europe” the lifeboat.

So, on 1 January 1973 Britain formally entered the EEC with Denmark and Ireland. Other Efta members remained outside the Community – Switzerland and Norway for good. Harold Wilson’s 1975 referendum on whether to stay in the EEC in effect turned on Europe’s superior economic performance – which, though no one realised it at the time, had just ended.

This memory of apparent British economic weakness half a century ago still seems to weigh with older Remainers. Yet it was based on a fundamental misconception: that European growth rates were permanently higher than in a supposedly outdated and declining Britain. In reality, faster growth on the mainland in the 1950s and 1960s was due to one-off structural modernisation: the large agricultural workforce shifted into more productive industrial employment. From the mid-1940s to the early 1970s this gave several European countries “windfall growth” at a higher rate than was possible in Britain, which since the 19th century had had no large agricultural sector to convert. By the early 1970s, once that catching up was finished, European growth rates became the same as, or slightly lower than, Britain’s. When measured over the whole half-century from 1950 to 2000, Britain’s economic performance was no different from the European norm. By the mid-1980s, growth was faster than in France and Germany, and today Britain’s economic fundamentals remain strong.

Slower European growth lessened the perceived attractiveness of EU integration. In 1992, on Black Wednesday (16 September), hesitant participation in the European Exchange Rate Mechanism led to forced devaluations in Finland, Sweden, Italy, Spain and, finally, Britain. This was a huge political shock, though an economic boost.

Black Wednesday subsequently made it politically difficult for Britain to join the eurozone – allowing us a narrow escape, attributable more to circumstance than to policy, as vocal political and economic lobbies urged joining.

Moreover, Britain’s trade with the rest of the EU was declining as a proportion of its global activity: as Gordon Brown observed in 2005, 80 per cent of the UK’s potential trade lay outside the EU. The EU’s single market proved not very effective at increasing trade between its members even before the crash of 2007-2008, and prolonged austerity thereafter made it stagnant. Consequently, in the 2016 referendum campaign, more emphasis was placed on the dangers of leaving the single market than on the precise benefits of being in it.

But the days when Britain seemed the Titanic and Europe the lifeboat were long gone. On the contrary, Britain, with its fluid and largely unregulated labour market, had become the employer of last resort for the depressed countries of the eurozone. The sustained importation of workers since the 1990s had become, for a large part of Britain’s working class, the thing that most obviously outweighed whatever legal or economic advantages the EU might theoretically offer.

***

What galvanised the vote for Brexit, I think, was a core attachment to national democracy: the only sort of democracy that exists in Europe. That is what “getting our country back” essentially means. Granted, the slogan covers a multitude of concerns and wishes, some of them irreconcilable; but that is what pluralist democracy involves. Britain has long been the country most resistant to ceding greater powers to the EU: opinion polls in the lead-up to the referendum showed that only 6 per cent of people in the UK (compared to 34 per cent in France, for instance, and 26 per cent in Germany) favoured increased centralisation – a measure of the feebleness of Euro-federalism in Britain.

In contrast, two-thirds wanted powers returned from the EU to the British government, with a majority even among the relatively Europhile young. This suggests a much greater opposition to EU centralisation than shown by the 52 per cent vote for Brexit. The difference may be accounted for by the huge pressure put on the electorate during the campaign. Indeed, arithmetic suggests that half even of Remain voters oppose greater powers being given to the EU. Yet its supporters regard an increase of EU control over economic and financial decisions – the basics of politics – as indispensable if the EU is to survive, because of the strains inherent in the eurozone system. This stark contradiction between the decentralisation that many of the peoples of Europe – and above all the British – want to see and the greater centralisation that the EU as an institution needs is wilfully ignored by Remain supporters. Those who deplore the British electorate’s excessive attachment to self-government as some sort of impertinence should be clear (not least with themselves) about whether they believe that the age of democracy in Europe is over, and that great decisions should be left to professional politicians, bureaucracies and large corporations.

Some have dismissed the Leave vote as an incoherent and anarchic protest against “the establishment”, or as a xenophobic reaction against immigrants. Some of the media in Britain and abroad have been doing their best to propagate this view. Yet xenophobia has not been a significant feature of British politics since the 1960s, and certainly far less so than in many obedient EU member states, including France, Germany, Greece and the Netherlands. As for the anti-establishment “revolt”, this emerged when parts of the establishment began to put organised pressure on the electorate to vote Remain. Would-be opinion-formers have hardly covered themselves in glory in recent weeks. They have been out of touch and out of sympathy with opinion in the country, unwilling or unable to engage in reasoned debate, and resorting to collective proclamations of institutional authority which proved embarrassingly ineffective.

Worst of all, their main argument – whether they were artists, actors, film-makers, university vice-chancellors or prestigious learned societies – was one of unabashed self-interest: the EU is our milch-cow, and hence you must feed it. This was a lamentable trahison des clercs. The reaction to the referendum result by some Remain partisans has been a monumental fit of pique that includes talking up economic crisis (which, as Keynes showed, is often self-fulfilling) and smearing 17 million Leave voters as xenophobes. This is both irresponsible and futile, and paves the way to political marginalisation.

The Queen’s call for “deeper, cooler consideration” is much needed. I recall Victor Hugo’s crushing invective against French elitists who rejected the verdict of democracy, when in 1850 he scorned “your ignorance of the country today, the antipathy that you feel for it and that it feels for you”.

This antipathy has reduced English politics to a temporary shambles. It is too early to say whether there will be some realignment of the fragments: One-Nation Toryism, Conservative neoliberalism, “new” and “old” Labour, the hibernating Liberal Democrats and Greens, the various nationalists and, of course, the unpredictable Ukip. When in the past there were similar crises – such as Labour’s rift over the national government in 1931, the Liberals’ split over Irish home rule in 1886, or the Tory fragmentation over the repeal of the Corn Laws in 1846 – the political balance was permanently changed.

***

Many Europeans fear that a breakdown of the EU could slide into a return to the horrors of the mid-20th century. Most people in Britain do not. The fundamental feature of the referendum campaign was that the majority was not frightened out of voting for Leave, either by political or by economic warnings. This is testimony to a significant change since the last referendum in 1975: most people no longer see Britain as a declining country dependent on the EU.

A Eurobarometer poll in 2013 showed that Britain was the only EU member state in which most citizens felt that they could face the future better outside the Union. Last month’s referendum reflected this view, which was not reversed by reiterated predictions of doom.

In retrospect, joining the Common Market in 1973 has proved an immense historic error. It is surely evident that we would not have been applying to join the EU in 2016 had we, like Norway or Switzerland, remained outside it. Yet the political and possibly economic costs of leaving it now are considerable. Even though discontent with the EU across much of Europe has recently overtaken sentiment in Britain, Britain is unique, in that, ever since the 1970s, its public has been consistently far less favourable to the idea of European integration than the electorate in any other country. Hence the various “opt-outs” and the critically important decision to remain outside the euro.

Now, by a great historic irony, we are heading towards the sort of associate status with the EU that we had in the late 1960s as the leading member of Efta, and which we could have kept. Instead, this country was led by its political elite, for reasons of prestige and because of exaggerated fears of national decline and marginalisation, into a vain attempt to be “at the heart of Europe”. It has been a dangerous illusion, born of the postwar declinist obsession, that Britain must “punch above its weight” both by following in the footsteps of the United States and by attaching itself to the EU.

For some, money, blood and control over our own policy were sacrifices worth making for a “seat at the top table”. This dual strategy has collapsed. In future we shall have to decide what is the appropriate and desirable role for Britain to play in the world, and we shall have to decide it for ourselves.

Robert Tombs is Professor of French History at Cambridge University. His most recent book is “The English and Their History” (Penguin)

This article first appeared in the 21 July 2016 issue of the New Statesman, The English Revolt