Crushed by the wheels of industry: critics increasingly see new tech as one of the free market's most dangerous tools of oppression. Image: Ikon Images

The new Luddites: why former digital prophets are turning against tech

Neo-Luddism began to emerge in the postwar period: first after the emergence of nuclear weapons, and second when it became apparent that new computer technologies had the power to change our lives completely.

Very few of us can be sure that our jobs will not, in the near future, be done by machines. We know about cars built by robots, cashpoints replacing bank tellers, ticket dispensers replacing train staff, self-service checkouts replacing supermarket staff, telephone operators replaced by “call trees”, and so on. But this is small stuff compared with what might happen next.

Nursing may be done by robots, delivery men replaced by drones, GPs replaced by artificially “intelligent” diagnosers and health-sensing skin patches, back-room grunt work in law offices done by clerical automatons and remote teaching conducted by computers. In fact, it is quite hard to think of a job that cannot be partly or fully automated. And technology is a classless wrecking ball – the old blue-collar jobs have been disappearing for years; now they are being followed by white-collar ones.

Ah, you may say, but human beings will always be better. This misses the point. It does not matter whether the new machines never achieve full human-like consciousness, or even real intelligence, they can almost certainly achieve just enough to do your job – not as well as you, perhaps, but much, much more cheaply. To modernise John Ruskin, “There is hardly anything in the world that some robot cannot make a little worse and sell a little cheaper, and the people who consider price only are this robot’s lawful prey.”

Inevitably, there will be social and political friction. The onset has been signalled by skirmishes such as the London Underground strikes over ticket-office staff redundancies caused by machine-readable Oyster cards, and by the rage of licensed taxi drivers at the arrival of online unlicensed car booking services such as Uber, Lyft and Sidecar.

This resentment is intensified by rising social inequality. Everybody now knows that neoliberalism did not deliver the promised “trickle-down” effect; rather, it delivered trickle-up, because, even since the recession began, almost all the fruits of growth have gone to the rich. Working- and middle-class incomes have flatlined or fallen. Now, it seems, the wealthy cyber-elites are creating machines to put the rest of us out of work entirely.

The effect of this is to undermine the central argument of those who hype the benefits of job replacement by machines. They say that new and better jobs will be created. They say this was always true in the past, so it will be true now. (This is the precise correlative of the neoliberals’ “rising tide floats all boats” argument.) But people now doubt the “new and better jobs” line trotted out – or barked – by the prophets of robotisation. The new jobs, if there are any, will more probably be serf-like attenders to the needs of the machine, burger-flippers to the robot classes.

Nevertheless, this future, too, is being sold in neoliberal terms. “I am sure,” wrote Mitch Free (sic) in a commentary for Forbes on 11 June, “it is really hard [to] see when your pay check is being directly impacted but the reality to any market disruption is that the market wants the new technology or business model more than they want what you offer, otherwise it would not get off the ground. The market always wins, you cannot stop it.”

Free was writing in response to what probably seemed to him a completely absurd development, a nightmarish impossibility – the return of Luddism. “Luddite” has, in the past few decades, been such a routine term of abuse for anybody questioning the march of the machines (I get it all the time) that most people assume that, like “fool”, “idiot” or “prat”, it can only ever be abusive. But, in truth, Luddism has always been proudly embraced by the few and, thanks to the present climate of machine mania and stagnating incomes, it is beginning to make a new kind of sense. From the angry Parisian taxi drivers who vandalised a car belonging to an Uber driver to a Luddite-sympathetic column by the Nobel laureate Paul Krugman in the New York Times, Luddism in practice and in theory is back on the streets.

Luddism derives its name from Ned Ludd, who is said to have smashed two “stocking frames” – knitting machines – in a fit of rage in 1779, but who may have been a fictional character. It became a movement, with Ludd as its Robin Hood, between 1811 and 1817 when English textile workers were threatened with unemployment by new technology, which the Luddites defined as “machinery hurtful to Commonality”. Mills were burned, machinery was smashed and the army was mobilised. At one time, according to Eric Hobsbawm, there were more soldiers fighting the Luddites than were fighting Napoleon in Spain. Parliament passed a bill making machine-smashing a capital offence, a move opposed by Byron, who wrote a song so seditious that it was not published until after his death: “. . . we/Will die fighting, or live free,/And down with all kings but King Ludd!”

Once the Luddites had been suppressed, the Industrial Revolution resumed its course and, over the ensuing two centuries, proved the most effective wealth-creating force ever devised by man. So it is easy to say the authorities were on the right side of history and the Luddites on the wrong one. But note that this is based on the assumption that individual sacrifice in the present – in the form of lost jobs and crafts – is necessary for the mechanised future. Even if this were true, there is a dangerous whiff of totalitarianism in the assumption.

Neo-Luddism began to emerge in the postwar period. First, the power of nuclear weapons made it clear to everybody that our machines could now put everybody out of work for ever by the simple expedient of killing them and, second, in the 1980s and 1990s it became apparent that new computer technologies had the power to change our lives completely.

Thomas Pynchon, in a brilliant essay for the New York Times in 1984 – he noted the resonance of the year – responded to the first new threat and, through literature, revitalised the idea of the machine as enemy. “So, in the science fiction of the Atomic Age and the cold war, we see the Luddite impulse to deny the machine taking a different direction. The hardware angle got de-emphasised in favour of more humanistic concerns – exotic cultural evolutions and social scenarios, paradoxes and games with space/time, wild philosophical questions – most of it sharing, as the critical literature has amply discussed, a definition of ‘human’ as particularly distinguished from ‘machine’.”

In 1992, Neil Postman, in his book Technopoly, rehabilitated the Luddites in response to the threat from computers: “The term ‘Luddite’ has come to mean an almost childish and certainly naive opposition to technology. But the historical Luddites were neither childish nor naive. They were people trying desperately to preserve whatever rights, privileges, laws and customs had given them justice in the older world-view.”

Underpinning such thoughts was the fear that there was a malign convergence – perhaps even a conspiracy – at work. In 1961, even President Eisenhower warned of the anti-democratic power of the “military-industrial complex”. In 1967 Lewis Mumford spoke presciently of the possibility of a “mega-machine” that would result from “the convergence of science, technics and political power”. Pynchon picked up the theme: “If our world survives, the next great challenge to watch out for will come – you heard it here first – when the curves of research and development in artificial intelligence, molecular biology and robotics all converge. Oboy.”

The possibility is with us still in Silicon Valley’s earnest faith in the Singularity – the moment, possibly to come in 2045, when we build our last machine, a super-intelligent computer that will solve all our problems and enslave or kill or save us. Such things are true only to the extent to which they are believed – and, in the Valley, this is believed, widely.

Environmentalists were obvious allies of neo-Luddism – adding global warming as a third threat to the list – and globalism, with its tendency to destroy distinctively local and cherished ways of life, was an obvious enemy. In recent decades, writers such as Chellis Glendinning, Langdon Winner and Jerry Mander have elevated the entire package into a comprehensive rhetoric of dissent from the direction in which the world is going. Winner wrote of Luddism as an “epistemological technology”. He added: “The method of carefully and deliberately dismantling technologies, epistemological Luddism, if you will, is one way of recovering the buried substance upon which our civilisation rests. Once unearthed, that substance could again be scrutinised, criticised, and judged.”

It was all very exciting, but then another academic rained on all their parades. His name was Ted Kaczynski, although he is more widely known as the Unabomber. In the name of his own brand of neo-Luddism, Kaczynski’s bombs killed three people and injured many more in a campaign that ran from 1978 to 1995. His 1995 manifesto, “Industrial Society and Its Future”, said: “The Industrial Revolution and its consequences have been a disaster for the human race,” and called for a global revolution against the conformity imposed by technology.

The lesson of the Unabomber was that radical dissent can become a form of psychosis and, in doing so, undermine the dissenters’ legitimate arguments. It is an old lesson and it is seldom learned. The British Dark Mountain Project (dark-mountain.net), for instance, is “a network of writers, artists and thinkers who have stopped believing the stories our civilisation tells itself”. They advocate “uncivilisation” in writing and art – an attempt “to stand outside the human bubble and see us as we are: highly evolved apes with an array of talents and abilities which we are unleashing without sufficient thought, control, compassion or intelligence”. This may be true, but uncivilising ourselves to express this truth threatens to create many more corpses than ever dreamed of by even the Unabomber.[1]

Obviously, if neo-Luddism is conceived of in psychotic or apocalyptic terms, it is of no use to anybody and could prove very dangerous. But if it is conceived of as a critical engagement with technology, it could be useful and essential. So far, this critical engagement has been limited for two reasons. First, there is the belief – it is actually a superstition – in progress as an inevitable and benign outcome of free-market economics. Second, there is the extraordinary power of the technology companies to hypnotise us with their gadgets. Since 1997 the first belief has found justification in a management theory that bizarrely, upon closer examination, turns out to be the mirror image of Luddism. That was the year in which Clayton Christensen published The Innovator’s Dilemma, judged by the Economist to be one of the most important business books ever written. Christensen launched the craze for “disruption”. Many other books followed and many management courses were infected. Jill Lepore reported in the New Yorker in June that “this fall, the University of Southern California is opening a new program: ‘The degree is in disruption,’ the university announced.” And back at Forbes it is announced with glee that we have gone beyond disruptive innovation into a new phase of “devastating innovation”.

It is all, as Lepore shows in her article, nonsense. Christensen’s idea was simply that innovation by established companies to satisfy customers would be undermined by the disruptive innovation of market newcomers. It was a new version of Henry Ford and Steve Jobs’s view that it was pointless asking customers what they wanted; the point was to show them what they wanted. It was nonsense because, Lepore says, it was only true for a few, carefully chosen case histories over very short time frames. The point was made even better by Christensen himself when, in 2007, he made the confident prediction that Apple’s new iPhone would fail.

Nevertheless, disruption still grips the business imagination, perhaps because it sounds so exciting. In Luddism you smash the employer’s machines; in disruption theory you smash the competitor’s. The extremity of disruptive theory provides an accidental justification for extreme Luddism. Yet still, technocratic propaganda routinely uses the vocabulary of disruption theory.

Meanwhile in the New York Times, Paul Krugman wrote a very neo-Luddite column that questioned the consoling belief that education would somehow solve the problem of the destruction of jobs by technology. “Today, however, a much darker picture of the effects of technology on labour is emerging. In this picture, highly educated workers are as likely as less educated workers to find themselves displaced and devalued, and pushing for more education may create as many problems as it solves.”

In other words – against all the education boosters from Tony Blair onwards – you can’t learn yourself into the future, because it is already owned by others, primarily the technocracy. But it is expert dissidents from within the technocracy who are more useful for moderate neo-Luddites. In 2000, Bill Joy, a co-founder of Sun Microsystems and a huge figure in computing history, broke ranks with an article for Wired entitled “Why the future doesn’t need us”. He saw that many of the dreams of Silicon Valley would either lead to, or deliberately include, termination of the human species. They still do – believers in the Singularity look forward to it as a moment when we will transcend our biological condition.

“Given the incredible power of these new technologies,” Joy wrote, “shouldn’t we be asking how we can best coexist with them? And if our own extinction is a likely, or even possible, outcome of our technological development, shouldn’t we proceed with great caution?”

Finally, there is Jaron Lanier, one of the creators of virtual reality, who lost faith in the direction technology was taking when his beloved music industry was eviscerated by the destruction of jobs that followed the arrival of downloading. Why, he repeatedly asks in books such as You Are Not a Gadget, should we design machines that lower the quality of things? This wasn’t what the internet was supposed to do.

Moderate neo-Luddism involves critical scepticism about the claims by the makers of the new machines and even more critical scepticism about the societies – primarily Silicon Valley – from which these anti-human ideas spring. At least now there is a TV satirical comedy about the place – HBO’s Silicon Valley – which will spread the news that the technocracy consists of very strange people who are, indeed, capable of building “machinery hurtful to Commonality”. The running joke in the first episode was about the way the technocrats always claim to be working to make a better world. As if.

Luddite laughter is a start. But there’s a long way to go before the technology beast is tamed. For the moment, you still may lose your job to a machine; but at least you can go down feeling and thinking – computers can’t do either. 

@bryanappleyard

Update 11 September 11am:


[1] The New Statesman has published the following letter in response to this article:

Bryan Appleyard’s article on “the new Luddites” (above) gave a rather misleading picture of the Dark Mountain Project, which apparently represents “a form of psychosis” likely to “create more corpses than ever dreamed of by even the Unabomber”. In reality, we are a network of writers, artists and thinkers, centred on the Dark Mountain journal. We publish two books of new work every year, much of it involving exactly the kind of “critical engagement” with technology for which Appleyard calls.

According to the New York Times, a publication not noted for its homicidal or psychotic tendencies, Dark Mountain is “changing the environmental debate in Britain and the rest of Europe”. We won’t speculate about Appleyard’s mental health or criminal intentions, but we do hope that the editors of the NS require a higher standard of research from him in future.

Dougald Hine, Paul Kingsnorth
Directors
Dark Mountain Project


This article first appeared in the 20 August 2014 issue of the New Statesman, What the Beatles did for Britain


The English Revolt

Brexit, Euroscepticism and the future of the United Kingdom.

English voters have led – some would say forced – the United Kingdom towards exit from the European Union. Was this an English revolt, the result of an upsurge over decades of a more assertive, perhaps resentful, sense of English identity? At one level, clearly so. Surveys indicate that individuals who most often describe themselves as “English”, and regions where this is common, were more inclined to vote Leave on 23 June. Some of these are poorer regions where marginalised people think that their voices are more likely to be heard in a national democracy than in an international trading bloc, and for whom patriotism is a source of self-respect. But it would only make sense to regard Leave as essentially an English reaction if discontent with the EU were confined to England, or specifically linked with feelings of Englishness.

In fact, negative opinions about the EU, and especially about its economic policy, are now more widespread in other countries than they are in England. Polls by the Pew Research Center last month showed that disapproval of the EU was as high in Germany and the Netherlands as in Britain, and higher in France, Greece and Spain. Though aggravated by the 2007-2008 crash and enforced policies of austerity, a decline in support was clear earlier. France’s referendum of May 2005 gave a 55 per cent No to the proposed EU constitution after thorough debate, and a now familiar pattern emerged: enthusiastic Europeanism was confined to the wealthiest suburbs and quarters of Paris, and the only professional groups that strongly voted Yes were big business, the liberal professions and academics.

Going far beyond the atavistic and incoherent English revolt that some think they discern, our referendum result is partly a consequence of transnational political phenomena across the democratic world: the disaffection of citizens from conventional politics, shown by falling turnouts for elections, shrinking party membership and the rise of new, sometimes extreme political movements; as well as the simultaneous detachment of a professional political class from civil society, and its consequent retreat into a closed world of institutions.

The EU embodies these phenomena in uniquely acute form. In several cases its central bodies have opposed – or, if one prefers, have been forced to deny – democratically expressed wishes. In Greece and Italy, the EU has enforced changes of government and policy, and in Denmark, Ireland and the Netherlands it has pressed countries to ignore or reverse popular referendums. Its own representative body, the European Parliament, has gained neither power nor legitimacy. Crucial decisions are taken in secret, making the EU a hiding place for beleaguered politicians as well as a source of lavish financial reward for insiders. In the words of the historian John Gillingham, Europe is now being governed by neither its peoples nor its ideals, but by a bank board. This is not the “superstate” of Eurosceptic mythology. Though it drains power and legitimacy away from national governments, it is incapable of exercising power effectively itself, whether to cope with short-term emergencies such as an inflow of refugees, or to solve chronic failings such as the creation of mass unemployment in southern Europe. The result is paralysis, the inability either to extricate itself from failing institutions or to make them work.

If popular discontent with the EU continues to increase (and it is hard to see how it could not) sooner or later there will be some unmanageable political or social crisis. The response of too many supporters of the EU is to screw the lid down tighter, including now by promising to make life difficult for the United Kingdom, pour décourager les autres. This is the organisation – unpopular, unaccountable, secretive, often corrupt, and economically failing – from which our decision to depart apparently causes people to weep in the streets.

***

Why this decision? Why in Britain? The simplest and perhaps the best answer is that we have had a referendum. If France, Greece, Italy and some other countries had been given the same choice, they might well have made the same decision. But of course they have not been and will not be given such a choice, barring severe political crisis. This is most obviously because countries that have adopted the euro – even those such as Greece, for which the IMF has predicted high unemployment at least until the 2040s – have no clear way out.

I make this obvious point to emphasise that the immediate explanation of what has happened lies not only and not mainly in different feelings about the EU in Britain, but in different political opportunities and levels of fear. The contrasting votes in Scotland and Northern Ireland have particular explanations. Scottish nationalists – like their counterparts in Catalonia – see the EU as an indispensable support for independence. Northern Ireland sees the matter primarily as one affecting its own, still tense domestic politics and its relations with the Republic. In a European perspective, Scotland and Northern Ireland are the outliers, not England and Wales. Indeed, Scotland’s vote makes it stand out as one of the most pro-EU countries in Europe. If ever there is another referendum to see whether Scots prefer the EU to the UK, it will show whether this level of support for the EU is solid.

If England is exceptional, it is not in its disaffection from the EU, nor in the political divisions the referendum vote has exposed (if France, for instance, had such a vote, one could expect blood in the streets). Rather, its exceptional characteristic is its long-standing and settled scepticism about the European project in principle, greater than in any other EU country. Every member has a specific history that shapes its attitude to the theoretical idea of European integration. As John Gillingham, one of the most perceptive historians of the EU, describes its beginnings: “to the French [supranationalism was] a flag of convenience, to the Italians it was preferable (by definition) to government by Rome, to the Germans a welcome escape route, and to the Benelux nations a better choice than being dominated by powerful neighbours”.

Subsequently, for the eastern European states, it was a decisive step away from communist dictatorship, and for southern Europe a line drawn under a traumatic history of civil conflict. There is also a widespread belief, powerful though fanciful, that the EU prevents war between the European states. All these are important reasons why there remains considerable support for unification as an aspiration. But all these reasons are weaker, and some of them non-existent, in Britain, and especially in England. The simple reason for this is that Britain’s experience of the 20th century was far less traumatic. Moreover, during that time loyalty to the nation was not tarnished with fascism, but was rather the buttress of freedom and democracy. Conversely, the vision of a European “superstate” is seen less as a guarantee of peace and freedom, and rather as the latest in a five-century succession of would-be continental hegemons.

Given all this, an obvious question is why the United Kingdom ever joined in the European project in the first place. The answer helps to explain the country’s subsequent lack of enthusiasm. Its first response to the creation of the European Economic Community in 1957 was not to join, but to agree to establish a separate European Free Trade Association (Efta) in 1959 with Austria, Denmark, Norway, Portugal, Sweden and Switzerland; over the next three decades the seven founder members were joined by Finland, Iceland and Liechtenstein. This worked efficiently, cheaply and amicably, and, in time, Efta and the EEC would doubtless have created trading arrangements and systems of co-operation. But then the historic mistake was made. Efta was considered too small to provide the diplomatic clout craved by Whitehall at a time of severe post-imperial jitters. A cabinet committee warned in 1960 that “if we try to remain aloof from [the EEC] – bearing in mind that this will be happening simultaneously with the contraction of our overseas possessions – we shall run the risk of losing political influence and of ceasing to be able to exercise any real claim to be a world Power”.

Besides, Washington disliked Efta as a barrier to its aim of a federal Europe, and the Americans put heavy pressure on London to apply to accede to the Treaty of Rome, which it duly did in August 1961. “It is only full membership, with the possibility of controlling and dominating Europe,” wrote an optimistic British cabinet official, “that is really attractive.”

As the former US secretary of state Dean Acheson (one of the early backers of European integration) put it, in a now celebrated comment in December 1962: “Great Britain has lost an empire, and has not yet found a role. The attempt to play a separate power role . . . apart from Europe . . . based on a ‘special relationship’ with the United States [or] on being the head of a ‘Commonwealth’ . . . – this role is about played out.”

Acheson’s words long haunted British policymakers; perhaps they still do. And yet Britain remains one of the half-dozen strongest and most assertive states anywhere in the world, just as it has been for the past three centuries.

To fear of diplomatic marginalisation was added fear of economic decline. A government report in 1953 warned of “relegation of the UK to the second division”. Over the next 30 years there was a chorus of dismay about “the sick man of Europe”. Belief that EEC membership at any price was the only cure for Britain’s perceived economic ills became the orthodoxy in official circles: Britain was “the sinking Titanic”, and “Europe” the lifeboat.

So, on 1 January 1973 Britain formally entered the EEC with Denmark and Ireland. Other Efta members remained outside the Community – Switzerland and Norway for good. Harold Wilson’s 1975 referendum on whether to stay in the EEC in effect turned on Europe’s superior economic performance – which, though no one realised it at the time, had just ended.

This memory of apparent British economic weakness half a century ago still seems to weigh with older Remainers. Yet it was based on a fundamental misconception: that European growth rates were permanently higher than in a supposedly outdated and declining Britain. In reality, faster growth on the mainland in the 1950s and 1960s was due to one-off structural modernisation: the large agricultural workforce shifted into more productive industrial employment. From the mid-1940s to the early 1970s this gave several European countries “windfall growth” at a higher rate than was possible in Britain, which since the 19th century had had no large agricultural sector to convert. By the early 1970s, once that catching up was finished, European growth rates became the same as, or slightly lower than, Britain’s. When measured over the whole half-century from 1950 to 2000, Britain’s economic performance was no different from the European norm. By the mid-1980s, growth was faster than in France and Germany, and today Britain’s economic fundamentals remain strong.

Slower European growth lessened the perceived attractiveness of EU integration. In 1992, on Black Wednesday (16 September), hesitant participation in the European Exchange Rate Mechanism led to forced devaluations in Finland, Sweden, Italy, Spain and, finally, Britain. This was a huge political shock, though an economic boost.

Black Wednesday subsequently made it politically difficult for Britain to join the eurozone – allowing us a narrow escape, attributable more to circumstance than to policy, as vocal political and economic lobbies urged joining.

Moreover, Britain’s trade with the rest of the EU was declining as a proportion of its global activity: as Gordon Brown observed in 2005, 80 per cent of the UK’s potential trade lay outside the EU. The EU’s single market proved not very effective at increasing trade between its members even before the crash of 2007-2008, and prolonged austerity thereafter made it stagnant. Consequently, in the 2016 referendum campaign, more emphasis was placed on the dangers of leaving the single market than on the precise benefits of being in it.

But the days when Britain seemed the Titanic and Europe the lifeboat were long gone. On the contrary, Britain, with its fluid and largely unregulated labour market, had become the employer of last resort for the depressed countries of the eurozone. The sustained importation of workers since the 1990s had become, for a large part of Britain’s working class, the thing that most obviously outweighed whatever legal or economic advantages the EU might theoretically offer.

***

What galvanised the vote for Brexit, I think, was a core attachment to national democracy: the only sort of democracy that exists in Europe. That is what “getting our country back” essentially means. Granted, the slogan covers a multitude of concerns and wishes, some of them irreconcilable; but that is what pluralist democracy involves. Britain has long been the country most resistant to ceding greater powers to the EU: opinion polls in the lead-up to the referendum showed that only 6 per cent of people in the UK (compared to 34 per cent in France, for instance, and 26 per cent in Germany) favoured increased centralisation – a measure of the feebleness of Euro-federalism in Britain.

In contrast, two-thirds wanted powers returned from the EU to the British government, with a majority even among the relatively Europhile young. This suggests a much greater opposition to EU centralisation than shown by the 52 per cent vote for Brexit. The difference may be accounted for by the huge pressure put on the electorate during the campaign. Indeed, arithmetic suggests that half even of Remain voters oppose greater powers being given to the EU. Yet its supporters regard an increase of EU control over economic and financial decisions – the basics of politics – as indispensable if the EU is to survive, because of the strains inherent in the eurozone system. This stark contradiction between the decentralisation that many of the peoples of Europe – and above all the British – want to see and the greater centralisation that the EU as an institution needs is wilfully ignored by Remain supporters. Those who deplore the British electorate’s excessive attachment to self-government as some sort of impertinence should be clear (not least with themselves) about whether they believe that the age of democracy in Europe is over, and that great decisions should be left to professional politicians, bureaucracies and large corporations.

Some have dismissed the Leave vote as an incoherent and anarchic protest against “the establishment”, or as a xenophobic reaction against immigrants. Some of the media in Britain and abroad have been doing their best to propagate this view. Yet xenophobia has not been a significant feature of British politics since the 1960s, and certainly far less so than in many obedient EU member states, including France, Germany, Greece and the Netherlands. As for the anti-establishment “revolt”, this emerged when parts of the establishment began to put organised pressure on the electorate to vote Remain. Would-be opinion-formers have hardly covered themselves in glory in recent weeks. They have been out of touch and out of sympathy with opinion in the country, unwilling or unable to engage in reasoned debate, and resorting to collective proclamations of institutional authority which proved embarrassingly ineffective.

Worst of all, their main argument – whether they were artists, actors, film-makers, university vice-chancellors or prestigious learned societies – was one of unabashed self-interest: the EU is our milch-cow, and hence you must feed it. This was a lamentable trahison des clercs. The reaction to the referendum result by some Remain partisans has been a monumental fit of pique that includes talking up economic crisis (which, as Keynes showed, is often self-fulfilling) and smearing 17 million Leave voters as xenophobes. This is both irresponsible and futile, and paves the way to political marginalisation.

The Queen’s call for “deeper, cooler consideration” is much needed. I recall Victor Hugo’s crushing invective against French elitists who rejected the verdict of democracy, when in 1850 he scorned “your ignorance of the country today, the antipathy that you feel for it and that it feels for you”.

This antipathy has reduced English politics to a temporary shambles. It is too early to say whether there will be some realignment of the fragments: One-Nation Toryism, Conservative neoliberalism, “new” and “old” Labour, the hibernating Liberal Democrats and Greens, the various nationalists and, of course, the unpredictable Ukip. When in the past there were similar crises – such as Labour’s rift over the national government in 1931, the Liberals’ split over Irish home rule in 1886, or the Tory fragmentation over the repeal of the Corn Laws in 1846 – the political balance was permanently changed.

***

Many Europeans fear that a breakdown of the EU could slide into a return to the horrors of the mid-20th century. Most people in Britain do not. The fundamental feature of the referendum campaign was that the majority was not frightened out of voting for Leave, either by political or by economic warnings. This is testimony to a significant change since the last referendum in 1975: most people no longer see Britain as a declining country dependent on the EU.

A Eurobarometer poll in 2013 showed that Britain was the only EU member state in which most citizens felt that they could face the future better outside the Union. Last month’s referendum reflected this view, which was not reversed by reiterated predictions of doom.

In retrospect, joining the Common Market in 1973 has proved an immense historic error. It is surely evident that we would not have been applying to join the EU in 2016 had we, like Norway or Switzerland, remained outside it. Yet the political and possibly economic costs of leaving it now are considerable. Even though discontent with the EU across much of Europe has recently overtaken sentiment in Britain, Britain is unique, in that, ever since the 1970s, its public has been consistently far less favourable to the idea of European integration than the electorate in any other country. Hence the various “opt-outs” and the critically important decision to remain outside the euro.

Now, by a great historic irony, we are heading towards the sort of associate status with the EU that we had in the late 1960s as the leading member of Efta, and which we could have kept. Instead, this country was led by its political elite, for reasons of prestige and because of exaggerated fears of national decline and marginalisation, into a vain attempt to be “at the heart of Europe”. It has been a dangerous illusion, born of the postwar declinist obsession, that Britain must “punch above its weight” both by following in the footsteps of the United States and by attaching itself to the EU.

For some, money, blood and control over our own policy were sacrifices worth making for a “seat at the top table”. This dual strategy has collapsed. In future we shall have to decide what is the appropriate and desirable role for Britain to play in the world, and we shall have to decide it for ourselves.

Robert Tombs is Professor of French History at Cambridge University. His most recent book is “The English and Their History” (Penguin)

This article first appeared in the 21 July 2016 issue of the New Statesman, The English Revolt