
Head in the cloud

As we download ever more of our lives on to electronic devices, are we destroying our own internal memory?

I do not remember my husband’s telephone number, or my best friend’s address. I have forgotten my cousin’s birthday, my seven times table, the date my grandfather died. When I write, I keep at least a dozen internet tabs open to look up names and facts I should easily be able to recall. There are so many things I no longer know, simple things that matter to me in practical and personal ways, yet I usually get by just fine. Apart from the few occasions when my phone has run out of battery at a crucial moment, or the day I accidentally plunged it into hot tea, or the evening my handbag was stolen, it hasn’t seemed to matter that I have downloaded most of my working memory on to electronic devices. It feels a small inconvenience, given that I can access information equivalent to tens of billions of books on a gadget that fits into my back pocket.

For thousands of years, human beings have relied on stone tablets, scrolls, books or Post-it notes to remember things that their minds cannot retain, but there is something profoundly different about the way we remember and forget in the internet age. It is not only our memory of facts that is changing. Our episodic memory, the mind’s ability to relive past experiences – the surprising sting of an old humiliation revisited, the thrill and discomfort of a first kiss, those seemingly endless childhood summers – is affected, too. The average Briton now spends almost nine hours a day staring at their phone, computer or television, and when more of our lives are lived on screen, more of our memories will be formed there. We are recording more about ourselves and our experiences than ever before, and though in the past this required deliberate effort, such as sitting down to write a diary, or filing away a letter, or posing for a portrait, today this process can be effortless, even unintentional. Never before have people had access to such comprehensive and accurate personal histories – and so little power to rewrite them.

My internet history faithfully documents my desktop meanderings, even when I resurface from hours of browsing with little memory of where I have been or what I have read. My Gmail account now contains over 35,000 emails received since 2005. It has preserved the banal – long-expired special offers, obsolete arrangements for post-work drinks – alongside the life-changing. Loves and break-ups are chronicled here; jobs, births and weddings are announced; deaths are grieved. My Facebook profile page has developed into a crowdsourced, if assiduously edited, photo album of my social life over the past decade. My phone is a museum of quick-fire text exchanges. With a few clicks, I can retrieve, in mind-numbing detail, information about my previous movements, thoughts and feelings. So could someone else. Even my most private digital memories are not mine alone. They have become data to be restructured, repackaged, aggregated, copied, deleted, monetised or sold by internet firms. Our digital memories extend far beyond our reach.

In the late 1990s the philosopher David Chalmers coined the term “the extended mind” to describe how when we use pen and paper, calculators, or laptops to help us think or remember, these external objects are incorporated into our cognitive processes. “The technology we use becomes part of our minds, extending our minds and indeed our selves into the world,” Chalmers said in a 2011 Ted talk. Our iPhones have not been physically implanted into our brains, he explained, but it’s as if they have been. There’s a big difference between offloading memory on to a notepad and doing it on to a smartphone. One is a passive receptacle, the other is active. A notebook won’t reorganise the information you give it or ping you an alert; its layout and functions won’t change overnight; its contents aren’t part-owned by the stationery firm that made it. The more we extend our minds online, the harder it is becoming to keep control of our digital pasts, or to tell where our memories begin or end. And, while society’s collective memory is expanding at an astonishing rate, our internal, individual ones are shrinking.

***

Our brains are lazy; we are reluctant to remember things when we can in effect delegate the task to someone or something else. You can observe this by listening to couples, who often consult one another’s memories: “What was the name of that nice Chinese restaurant we went to the other day?” Subconsciously, partners distribute responsibility for remembering information according to each other’s strengths. I ask my husband for directions, he consults me on people’s names.

In one study conducted in 1991, psychologists assigned a series of memory exercises to pairs of students, some of whom had been dating for at least three months and some of whom did not know one another. The dating couples remembered more than the non-dating pairs. They also remembered more unique information; when a fact fell into their partner’s area of expertise, they were more likely to forget it.

In a similar way, when we know that a computer can remember something for us we are less likely to remember it ourselves. For a study published by the journal Science in 2011, people were asked to type some trivia facts into a computer. Those who believed the facts would be saved at the end of the experiment remembered less than those who thought they would be deleted – even when they were explicitly asked to memorise them. In an era when technology is doing ever more remembering, it is unsurprising that we are more inclined to forget.

It is sometimes suggested that in time the worry that the internet is making us forgetful will sound as silly as early fears that books would do the same. But the internet is not an incremental step in the progression of written culture; it is revolutionising the way we consume information. When you pull an encyclopaedia down from a library shelf, it is obvious that you are retrieving a fact you have forgotten, or never knew. Google is so fast and easy to use that we can forget we have consulted it at all: we are at risk of confusing the internet’s memory with our own. A Harvard University project in 2013 found that when people were allowed to use Google to check their answers to trivia questions they rated their own intelligence and memories more highly – even if they were given artificially low test results. Students often believed that Google was confirming a fact they already knew, rather than providing them with new information.

This changed when Adrian Ward, now an assistant professor at the University of Texas at Austin, who designed the study as part of his PhD research, mimicked a slow internet connection so that students were forced to wait 25 seconds to read the answer to a Google query. The delay, he noted, stripped them of the “feeling of knowing” because they became more aware that they were consulting an external source. In the internet age, Ward writes, people “may offload more and more information while losing sight of the distinction between information stored in their minds and information stored online”.

By blurring the distinction between our personal and our digital memories, modern technology could encourage intellectual complacency, making people less curious about new information because they feel they already know it, and less likely to pay attention to detail because our computers are remembering it. What if the same could be said for our own lives: are we less attentive to our experiences because we know that computers will record them for us?

An experiment by the American psychologist Linda Henkel suggests this could be the case; she has found that when people take photographs at museums they are more likely to forget details of what they have seen. To some extent, we’re all tourists exploring the world from behind a camera, too distracted by our digital memories to inhabit our analogue lives fully. Relying on computers to remember telephone numbers or trivia does not seem to deprive our internal memories of too much – provided you can remember where you’ve stored it, factual information is fairly straightforward to retrieve. Yet a digital memory is a poor substitute for the richness of a personal experience revisited, and our autobiographical memories cannot be “retrieved” by opening the relevant online file.

Our relationship with the past is capricious. Sometimes an old photograph can appear completely unfamiliar, while at other times the faintest hint – the smell of an ex-lover’s perfume on a crowded Tube carriage – can induce overwhelming nostalgia. Remembering is in part a feeling, of recognition, of having been there, of reinhabiting a former self. This feeling is misleading; we often imagine memories offer an authentic insight into our past. They do not.

Memory is closely linked to self-identity, but it is a poor personal record. Remembering is a creative act. It is closely linked to imagining. When people suffer from dementia they are often robbed not only of the past but also of the future; without memory it is hard to construct an idea of future events. We often mistakenly convert our imaginings into memories – scientists call the process “imagination inflation”. This puts biological memories at odds with digital ones. While memories stored online can be retrieved intact, our internal memories are constantly changing and evolving. Each time we relive a memory, we reconfigure it to suit our present needs and world-view. In his book Pieces of Light, an exploration of the new science of memory, the neuroscientist Charles Fernyhough compares the construction of memory to storytelling. To impose meaning on to our chaotic, complex lives we need to know which sections to abridge and which details can be ignored. “We are all natural born storytellers. We are constantly editing and remaking our memory stories as our knowledge and emotions change. They may be fictions, but they are our fictions,” Fernyhough writes.

We do not write these stories alone. The human mind is suggestible. In 2013, scientists at MIT made international headlines when they said they had successfully implanted a false memory into a mouse using a form of light stimulation, but human beings implant false memories into each other all the time, using more low-tech methods. Friends and family members are forever distorting one another’s memories. I remember distinctly being teased for my Dutch accent at school and indignantly telling my mother when I arrived home that, “It’s pronounced one, two, three. Not one, two, tree.” My brother is sure it was him. The anecdote is tightly woven into the story of our pasts, but one of us must be wrong. When we record our personal memories online we open up new possibilities for their verification but we also create different opportunities for their distortion. In subtle ways, internet firms are manipulating our digital memories all the time – and we are often dangerously unaware of it.

***

Facebook occasionally gives me a reminder of Mahmoud Tlissy, the caretaker at my former office in Libya who died quietly of pancreatic cancer in 2011 while the civil war was raging. Every so often he sends me a picture of a multicoloured heart via a free app that outlived him. Mahmoud was a kind man with a sardonic sense of humour, a deep smoker’s laugh and a fondness for recounting his wild days as a student in Prague. I am always pleased to be reminded of him, but I feel uncomfortable because I doubt he would have chosen such a naff way to communicate with me after death. Our digital lives will survive us, sending out e-hearts and populating databases long after we have dropped off the census. When we deposit our life memories online, they start to develop lives of their own.

Those who want to limit the extent to which their lives are recorded digitally are swimming against the tide. Internet firms have a commercial interest in encouraging us not only to offload more personal information online, but also to use digital technology to reflect on our lives. Take Facebook, which was developed as a means of communicating but is becoming a tool for remembering and memorialising, too. The first Facebook users, who were university students in 2004, are mostly in their thirties now. Their graduations, first jobs, first loves, marriages and first children are likely to be recorded on the site; friends who have died young are likely to be mourned on it. The website understands that nostalgia is a powerful marketing tool, and so it has released gimmicky tools, such as automated videos, to help people “look back”.

These new online forms of remembrance are becoming popular. On Instagram and Twitter it is common for users to post sentimental old snaps under the hashtag #tbt, which stands for “Throwback Thursday”. Every day, seven million people check Timehop, an app that says it “helps you see the best moments of your past” by showing you old tweets, photos and online messages. Such tools are presented as a way of enriching our ability to relive the past but they are limiting. We can use them to tell stories about our lives, but the pace and structure of the narrative is defined for us. Remembering is an imaginative act, but internet firms are selling nostalgia by algorithm – and we’re buying it.

At their most ambitious, tech companies are offering the possibility of objective and complete insight into our pasts. In the future, “digital memories” could “[enhance] personal reflection in much the same way as the internet has aided scientific investigations”, the computer scientists Gordon Bell and Jim Gemmell wrote in the magazine Scientific American in 2006. The assumption is that our complex, emotional autobiographical memories can be captured as data to be ordered, quantified and analysed – and that computer programs could make better sense of them than our own, flawed brains. The pair have been collaborating on a Microsoft Research “life-logging” project since 2001, in which Bell logs everything he has said, written, seen and heard into a specially designed database.

Bell understood that the greatest challenge is finding a way to make digital archives usable. Without a program to help us extract information, digital memories are virtually useless: imagine trying to retrieve a telephone number from a month’s worth of continuous video footage. In our increasingly life-logged futures, we will all depend on powerful computer programs to index, analyse, repackage and retrieve our digital memories for us. The act of remembering will become automated. We will no longer make our “own fictions”.

This might sound like a distant sci-fi fantasy, but we are already a long way there. Billions of people share their news and views by email or on social media daily, and unwittingly leave digital trails as they browse the web. The use of tracking devices to measure and record sleep, diet, exercise patterns, health and even mood is increasing. In the future, these comprehensive databases could prove very useful. When you go to the doctor, you might be able to provide details of your precise diet, exercise and sleep patterns. When a relationship breaks down you could be left with many gigabytes of digital memory to explore and make sense of. Did you really always put him down? Should you have broken up four years ago? In a few years’ time there could be an app for that.

Our reliance on digital memories is self-perpetuating: the more we depend on computer memories to provide us with detailed personal data, the more inadequate our own minds seem. Yet the fallibility of the human memory isn’t a design flaw, it is one of its best features. Recently, I typed the name of an ex-boyfriend into my Gmail search bar. This wasn’t like opening a box of old letters. For a start, I could access both sides of our email correspondence. Second, I could browse dozens of G-chats, instant messaging conversations so mundane and spontaneous that reading them can feel more like eavesdropping on a former self, or a stranger. The messages surprised me. I had remembered the relationship as short-lived and volatile but lacking any depth of feeling. So why had I sent those long, late-night emails? And what could explain his shorter, no less dramatic replies, “Will u ever speak to me again? You will ignore this I suspect but I love you.” Did he love me? Was I really so hurt? I barely recognise myself as the author of my messages; the feelings seem to belong to someone else.

My digital archives will offer a very different narrative from the half-truths and lies I tell myself, but I am more at home with my fictions. The “me” at the centre of my own memories is constantly evolving, but my digital identity is frozen in time. I feel a different person now; my computer suggests otherwise. Practically, this can pose problems (many of us are in possession of teenage social media posts we hope will never be made public) and psychologically it matters, too. To a greater or lesser extent, we all want to shed our former selves – but digital memories keep us firmly connected to our past. Forgetting and misremembering is a source of freedom: the freedom to reinvent oneself, to move on, to rewrite our stories. It means that old wounds need not hurt for ever, that love can be allowed to fade, that people can change.

With every passing year, we are shackling ourselves more tightly to our digital legacies, and relying more heavily on computer programs to narrate our personal histories for us. It is becoming ever harder to escape the past, or remake the future. Years from now, our digitally enhanced memories could allow us to have near-perfect recall, but who would want to live with their head in the cloud?

Sophie McBain is an NS contributing writer. This article was a runner-up in the 2015 Bodley Head FT Essay Prize


This article first appeared in the 18 February 2016 issue of the New Statesman, A storm is coming


The English Revolt

Brexit, Euroscepticism and the future of the United Kingdom.

English voters have led – some would say forced – the United Kingdom towards exit from the European Union. Was this an English revolt, the result of an upsurge over decades of a more assertive, perhaps resentful, sense of English identity? At one level, clearly so. Surveys indicate that individuals who most often describe themselves as “English”, and regions where this is common, were more inclined to vote Leave on 23 June. Some of these are poorer regions where marginalised people think that their voices are more likely to be heard in a national democracy than in an international trading bloc, and for whom patriotism is a source of self-respect. But it would only make sense to regard Leave as essentially an English reaction if discontent with the EU were confined to England, or specifically linked with feelings of Englishness.

In fact, negative opinions about the EU, and especially about its economic policy, are now more widespread in other countries than they are in England. Polls by the Pew Research Center last month showed that disapproval of the EU was as high in Germany and the Netherlands as in Britain, and higher in France, Greece and Spain. Though aggravated by the 2007-2008 crash and enforced policies of austerity, a decline in support was clear earlier. France’s referendum of May 2005 gave a 55 per cent No to the proposed EU constitution after thorough debate, and a now familiar pattern emerged: enthusiastic Europeanism was confined to the wealthiest suburbs and quarters of Paris, and the only professional groups that strongly voted Yes were big business, the liberal professions and academics.

Going far beyond the atavistic and incoherent English revolt that some think they discern, our referendum result is partly a consequence of transnational political phenomena across the democratic world: the disaffection of citizens from conventional politics, shown by falling turnouts for elections, shrinking party membership and the rise of new, sometimes extreme political movements; as well as the simultaneous detachment of a professional political class from civil society, and its consequent retreat into a closed world of institutions.

The EU embodies these phenomena in uniquely acute form. In several cases its central bodies have opposed – or, if one prefers, have been forced to deny – democratically expressed wishes. In Greece and Italy, the EU has enforced changes of government and policy, and in Denmark, Ireland and the Netherlands it has pressed countries to ignore or reverse popular referendums. Its own representative body, the European Parliament, has gained neither power nor legitimacy. Crucial decisions are taken in secret, making the EU a hiding place for beleaguered politicians as well as a source of lavish financial reward for insiders. In the words of the historian John Gillingham, Europe is now being governed by neither its peoples nor its ideals, but by a bank board. This is not the “superstate” of Eurosceptic mythology. Though it drains power and legitimacy away from national governments, it is incapable of exercising power effectively itself, whether to cope with short-term emergencies such as an inflow of refugees, or to solve chronic failings such as the creation of mass unemployment in southern Europe. The result is paralysis, the inability either to extricate itself from failing institutions or to make them work.

If popular discontent with the EU continues to increase (and it is hard to see how it could not) sooner or later there will be some unmanageable political or social crisis. The response of too many supporters of the EU is to screw the lid down tighter, including now by promising to make life difficult for the United Kingdom, pour décourager les autres. This is the organisation – unpopular, unaccountable, secretive, often corrupt, and economically failing – from which our decision to depart apparently causes people to weep in the streets.

***

Why this decision? Why in Britain? The simplest and perhaps the best answer is that we have had a referendum. If France, Greece, Italy and some other countries had been given the same choice, they might well have made the same decision. But of course they have not been and will not be given such a choice, barring severe political crisis. This is most obviously because countries that have adopted the euro – even those such as Greece, for which the IMF has predicted high unemployment at least until the 2040s – have no clear way out.

I make this obvious point to emphasise that the immediate explanation of what has happened lies not only and not mainly in different feelings about the EU in Britain, but in different political opportunities and levels of fear. The contrasting votes in Scotland and Northern Ireland have particular explanations. Scottish nationalists – like their counterparts in Catalonia – see the EU as an indispensable support for independence. Northern Ireland sees the matter primarily as one affecting its own, still tense domestic politics and its relations with the Republic. In a European perspective, Scotland and Northern Ireland are the outliers, not England and Wales. Indeed, Scotland’s vote makes it stand out as one of the most pro-EU countries in Europe. If ever there is another referendum to see whether Scots prefer the EU to the UK, it will show whether this level of support for the EU is solid.

If England is exceptional, it is not in its disaffection from the EU, nor in the political divisions the referendum vote has exposed (if France, for instance, had such a vote, one could expect blood in the streets). Rather, its exceptional characteristic is its long-standing and settled scepticism about the European project in principle, greater than in any other EU country. Every member has a specific history that shapes its attitude to the theoretical idea of European integration. As John Gillingham, one of the most perceptive historians of the EU, describes its beginnings: “to the French [supranationalism was] a flag of convenience, to the Italians it was preferable (by definition) to government by Rome, to the Germans a welcome escape route, and to the Benelux nations a better choice than being dominated by powerful neighbours”.

Subsequently, for the eastern European states, it was a decisive step away from communist dictatorship, and for southern Europe a line drawn under a traumatic history of civil conflict. There is also a widespread belief, powerful though fanciful, that the EU prevents war between the European states. All these are important reasons why there remains considerable support for unification as an aspiration. But all these reasons are weaker, and some of them non-existent, in Britain, and especially in England. The simple reason for this is that Britain’s experience of the 20th century was far less traumatic. Moreover, during that time loyalty to the nation was not tarnished with fascism, but was rather the buttress of freedom and democracy. Conversely, the vision of a European “superstate” is seen less as a guarantee of peace and freedom, and rather as the latest in a five-century succession of would-be continental hegemons.

Given all this, an obvious question is why the United Kingdom ever joined in the European project in the first place. The answer helps to explain the country’s subsequent lack of enthusiasm. Its first response to the creation of the European Economic Community in 1957 was not to join, but to agree to establish a separate European Free Trade Association (Efta) in 1959 with Austria, Denmark, Norway, Portugal, Sweden and Switzerland; over the next three decades the seven founder members were joined by Finland, Iceland and Liechtenstein. This worked efficiently, cheaply and amicably, and, in time, Efta and the EEC would doubtless have created trading arrangements and systems of co-operation. But then the historic mistake was made. Efta was considered too small to provide the diplomatic clout craved by Whitehall at a time of severe post-imperial jitters. A cabinet committee warned in 1960 that “if we try to remain aloof from [the EEC] – bearing in mind that this will be happening simultaneously with the contraction of our overseas possessions – we shall run the risk of losing political influence and of ceasing to be able to exercise any real claim to be a world Power”.

Besides, Washington disliked Efta as a barrier to its aim of a federal Europe, and the Americans put heavy pressure on London to apply to accede to the Treaty of Rome, which it duly did in August 1961. “It is only full membership, with the possibility of controlling and dominating Europe,” wrote an optimistic British cabinet official, “that is really attractive.”

As the former US secretary of state Dean Acheson (one of the early backers of European integration) put it, in a now celebrated comment in December 1962: “Great Britain has lost an empire, and has not yet found a role. The attempt to play a separate power role . . . apart from Europe . . . based on a ‘special relationship’ with the United States [or] on being the head of a ‘Commonwealth’ . . . – this role is about played out.”

Acheson’s words long haunted British policymakers; perhaps they still do. And yet Britain remains one of the half-dozen strongest and most assertive states anywhere in the world, just as it has been for the past three centuries.

To fear of diplomatic marginalisation was added fear of economic decline. A government report in 1953 warned of “relegation of the UK to the second division”. Over the next 30 years there was a chorus of dismay about “the sick man of Europe”. Belief that EEC membership at any price was the only cure for Britain’s perceived economic ills became the orthodoxy in official circles: Britain was “the sinking Titanic”, and “Europe” the lifeboat.

So, on 1 January 1973 Britain formally entered the EEC with Denmark and Ireland. Other Efta members remained outside the Community – Switzerland and Norway for good. Harold Wilson’s 1975 referendum on whether to stay in the EEC in effect turned on Europe’s superior economic performance – which, though no one realised it at the time, had just ended.

This memory of apparent British economic weakness half a century ago still seems to weigh with older Remainers. Yet it was based on a fundamental misconception: that European growth rates were permanently higher than in a supposedly outdated and declining Britain. In reality, faster growth on the mainland in the 1950s and 1960s was due to one-off structural modernisation: the large agricultural workforce shifted into more productive industrial employment. From the mid-1940s to the early 1970s this gave several European countries “windfall growth” at a higher rate than was possible in Britain, which since the 19th century had had no large agricultural sector to convert. By the early 1970s, once that catching up was finished, European growth rates became the same as, or slightly lower than, Britain’s. When measured over the whole half-century from 1950 to 2000, Britain’s economic performance was no different from the European norm. By the mid-1980s, growth was faster than in France and Germany, and today Britain’s economic fundamentals remain strong.

Slower European growth lessened the perceived attractiveness of EU integration. In 1992, on Black Wednesday (16 September), hesitant participation in the European Exchange Rate Mechanism led to forced devaluations in Finland, Sweden, Italy, Spain and, finally, Britain. This was a huge political shock, though an economic boost.

Black Wednesday subsequently made it politically difficult for Britain to join the eurozone – allowing us a narrow escape, attributable more to circumstance than to policy, as vocal political and economic lobbies urged joining.

Moreover, Britain’s trade with the rest of the EU was declining as a proportion of its global activity: as Gordon Brown observed in 2005, 80 per cent of the UK’s potential trade lay outside the EU. The EU’s single market proved not very effective at increasing trade between its members even before the crash of 2007-2008, and prolonged austerity thereafter made it stagnant. Consequently, in the 2016 referendum campaign, more emphasis was placed on the dangers of leaving the single market than on the precise benefits of being in it.

But the days when Britain seemed the Titanic and Europe the lifeboat were long gone. On the contrary, Britain, with its fluid and largely unregulated labour market, had become the employer of last resort for the depressed countries of the eurozone. The sustained importation of workers since the 1990s had become, for a large part of Britain’s working class, the thing that most obviously outweighed whatever legal or economic advantages the EU might theoretically offer.

***

What galvanised the vote for Brexit, I think, was a core attachment to national democracy: the only sort of democracy that exists in Europe. That is what “getting our country back” essentially means. Granted, the slogan covers a multitude of concerns and wishes, some of them irreconcilable; but that is what pluralist democracy involves. Britain has long been the country most resistant to ceding greater powers to the EU: opinion polls in the lead-up to the referendum showed that only 6 per cent of people in the UK (compared to 34 per cent in France, for instance, and 26 per cent in Germany) favoured increased centralisation – a measure of the feebleness of Euro-federalism in Britain.

In contrast, two-thirds wanted powers returned from the EU to the British government, with a majority even among the relatively Europhile young. This suggests a much greater opposition to EU centralisation than shown by the 52 per cent vote for Brexit. The difference may be accounted for by the huge pressure put on the electorate during the campaign. Indeed, arithmetic suggests that half even of Remain voters oppose greater powers being given to the EU. Yet its supporters regard an increase of EU control over economic and financial decisions – the basics of politics – as indispensable if the EU is to survive, because of the strains inherent in the eurozone system. This stark contradiction between the decentralisation that many of the peoples of Europe – and above all the British – want to see and the greater centralisation that the EU as an institution needs is wilfully ignored by Remain supporters. Those who deplore the British electorate’s excessive attachment to self-government as some sort of impertinence should be clear (not least with themselves) about whether they believe that the age of democracy in Europe is over, and that great decisions should be left to professional politicians, bureaucracies and large corporations.

Some have dismissed the Leave vote as an incoherent and anarchic protest against “the establishment”, or as a xenophobic reaction against immigrants. Some of the media in Britain and abroad have been doing their best to propagate this view. Yet xenophobia has not been a significant feature of British politics since the 1960s, and certainly far less so than in many obedient EU member states, including France, Germany, Greece and the Netherlands. As for the anti-establishment “revolt”, this emerged when parts of the establishment began to put organised pressure on the electorate to vote Remain. Would-be opinion-formers have hardly covered themselves in glory in recent weeks. They have been out of touch and out of sympathy with opinion in the country, unwilling or unable to engage in reasoned debate, and resorting to collective proclamations of institutional authority which proved embarrassingly ineffective.

Worst of all, their main argument – whether they were artists, actors, film-makers, university vice-chancellors or prestigious learned societies – was one of unabashed self-interest: the EU is our milch-cow, and hence you must feed it. This was a lamentable trahison des clercs. The reaction to the referendum result by some Remain partisans has been a monumental fit of pique that includes talking up economic crisis (which, as Keynes showed, is often self-fulfilling) and smearing 17 million Leave voters as xenophobes. This is both irresponsible and futile, and paves the way to political marginalisation.

The Queen’s call for “deeper, cooler consideration” is much needed. I recall Victor Hugo’s crushing invective against French elitists who rejected the verdict of democracy, when in 1850 he scorned “your ignorance of the country today, the antipathy that you feel for it and that it feels for you”.

This antipathy has reduced English politics to a temporary shambles. It is too early to say whether there will be some realignment of the fragments: One-Nation Toryism, Conservative neoliberalism, “new” and “old” Labour, the hibernating Liberal Democrats and Greens, the various nationalists and, of course, the unpredictable Ukip. When in the past there were similar crises – such as Labour’s rift over the national government in 1931, the Liberals’ split over Irish home rule in 1886, or the Tory fragmentation over the repeal of the Corn Laws in 1846 – the political balance was permanently changed.

***

Many Europeans fear that a breakdown of the EU could slide into a return to the horrors of the mid-20th century. Most people in Britain do not. The fundamental feature of the referendum campaign was that the majority was not frightened out of voting for Leave, either by political or by economic warnings. This is testimony to a significant change since the last referendum in 1975: most people no longer see Britain as a declining country dependent on the EU.

A Eurobarometer poll in 2013 showed that Britain was the only EU member state in which most citizens felt that they could face the future better outside the Union. Last month’s referendum reflected this view, which was not reversed by reiterated predictions of doom.

In retrospect, joining the Common Market in 1973 has proved an immense historic error. It is surely evident that we would not have been applying to join the EU in 2016 had we, like Norway or Switzerland, remained outside it. Yet the political and possibly economic costs of leaving it now are considerable. Even though discontent with the EU across much of Europe has recently overtaken sentiment in Britain, Britain is unique, in that, ever since the 1970s, its public has been consistently far less favourable to the idea of European integration than the electorate in any other country. Hence the various “opt-outs” and the critically important decision to remain outside the euro.

Now, by a great historic irony, we are heading towards the sort of associate status with the EU that we had in the late 1960s as the leading member of Efta, and which we could have kept. Instead, this country was led by its political elite, for reasons of prestige and because of exaggerated fears of national decline and marginalisation, into a vain attempt to be “at the heart of Europe”. It has been a dangerous illusion, born of the postwar declinist obsession, that Britain must “punch above its weight” both by following in the footsteps of the United States and by attaching itself to the EU.

For some, money, blood and control over our own policy were sacrifices worth making for a “seat at the top table”. This dual strategy has collapsed. In future we shall have to decide what is the appropriate and desirable role for Britain to play in the world, and we shall have to decide it for ourselves.

Robert Tombs is Professor of French History at Cambridge University. His most recent book is “The English and Their History” (Penguin)

This article first appeared in the 21 July 2016 issue of the New Statesman, The English Revolt