Treat with extreme caution

Homoeopathic medicine is founded on a bogus philosophy. Its continued use is a drain on NHS resources

Two years ago, a loose coalition of like-minded scientists wrote an open letter to chief executives of the National Health Service Trusts. The signatories simply stated that homoeopathy and other alternative therapies were unproven, and that the NHS should reserve its funds for treatments that had been shown to work. The letter triggered an extraordinary downturn in the fortunes of homoeopathy in the UK: over the following year, the overwhelming majority of trusts either stopped sending patients to the four homoeopathic hospitals, or introduced measures to strictly limit referrals.

Consequently, the future of these hospitals is now in doubt. The Tunbridge Wells Homoeopathic Hospital is set to close next year and the Royal London Homoeopathic Hospital is likely to follow in its wake. Homoeopaths are now so worried about the collapse of their flagship hospitals that they are organising a march to deliver a petition to Downing Street on 22 June. Local campaign groups are being formed and patients are being urged to sign the petition.

Homoeopaths believe that the medical Establishment is crushing a valuable healing tradition that dates back more than two centuries and that still has much to offer patients. Homoeopaths are certainly passionate about the benefits of their treatment, but are their claims valid, or are they misguidedly promoting a bogus philosophy?

This is a question that I have been considering for the past two years, ever since I began co-authoring a book on the subject of alternative medicine with Professor Edzard Ernst. He was one of the signatories of the letter to the NHS trusts and is the world's first professor of complementary medicine. Before I present our conclusion, it is worth remembering why homoeopathy has always existed beyond the borders of mainstream medicine.

Homoeopathy relies on two key principles, namely that like cures like, and that smaller doses deliver more powerful effects. In other words, if onions cause our eyes to stream, then a homoeopathic pill made from onion juice might be a potential cure for the eye irritation caused by hay fever. Crucially, the onion juice would need to be diluted repeatedly to produce the pill that can be administered to the patient, as homoeopaths believe that less is more.

Initially, this sounds attractive, and not dissimilar to the principle of vaccination, whereby a small amount of virus can be used to protect patients from viral infection. However, doctors use the principle of like cures like very selectively, whereas homoeopaths use it universally. Moreover, a vaccination always contains a measurable amount of active ingredient, whereas homoeopathic remedies are usually so dilute that they contain no active ingredient whatsoever.

A pill that contains no medicine is unlikely to be effective, but millions of patients swear by this treatment. From a scientific point of view, the obvious explanation is that any perceived benefit is purely a result of the placebo effect, because it is well established that any patient who believes in a remedy is likely to experience some improvement in their condition due to the psychological impact. Homoeopaths disagree, and claim that a "memory" of the homoeopathic ingredient has a profound physiological effect on the patient. So the key question is straightforward: is homoeopathy more than just a placebo treatment?

Fortunately, medical researchers have conducted more than 200 clinical trials to investigate the impact of homoeopathy on a whole range of conditions. Typically, one group of patients is given homoeopathic remedies and another group is given a known placebo, such as a sugar pill. Researchers then examine whether or not the homoeopathic group improves on average more than the placebo group. The overall conclusion from all this research is that homoeopathic remedies are indeed mere placebos.

In other words, their benefit is based on nothing more than wishful thinking. The latest and most definitive overview of the evidence was published in the Lancet in 2005 and was accompanied by an editorial entitled "The end of homoeopathy". It argued that ". . . doctors need to be bold and honest with their patients about homoeopathy's lack of benefit".

An unsound investment

However, even if homoeopathy is a placebo treatment, anybody working in health care will readily admit that the placebo effect can be a very powerful force for good. Therefore, it could be argued that homoeopaths should be allowed to flourish as they administer placebos that clearly appeal to patients. Despite the undoubted benefits of the placebo effect, however, there are numerous reasons why it is unjustifiable for the NHS to invest in homoeopathy.

First, it is important to recognise that money spent on homoeopathy means a lack of investment elsewhere in the NHS. It is estimated that the NHS spends £500m annually on alternative therapies, but instead of spending this money on unproven or disproven therapies it could be used to pay for 20,000 more nurses. Another way to appreciate the sum of money involved is to consider the recent refurbishment of the Royal London Homoeopathic Hospital, which was completed in 2005 and cost £20m. The hospital is part of the University College London Hospitals NHS Foundation Trust, which contributed £10m to the refurbishment, even though it had to admit a deficit of £17.4m at the end of 2005. In other words, most of that deficit could have been avoided if the Trust had not spent so much money on refurbishing the spiritual home of homoeopathy.

Second, the placebo effect is real, but it can lull patients into a false sense of security by improving their sense of well-being without actually treating the underlying conditions. This might be all right for patients suffering from a cold or flu, which should clear up given time, but for more severe illnesses, homoeopathic treatment could lead to severe long-term problems. Because those who administer homoeopathic treatment are outside conventional medicine and therefore largely unmonitored, it is impossible to prove the damage caused by placebo treatment. Nevertheless, there is plenty of anecdotal evidence to support this claim.

For example, in 2003 Professor Ernst was working with homoeopaths who were taking part in a study to see if they could treat asthma. Unknown to the professor or any of the other researchers, one of the homoeopaths had a brown spot on her arm, which was growing in size and changing in colour. Convinced that homoeopathy was genuinely effective, the homoeopath decided to treat it herself using her own remedies. Buoyed by the placebo effect, she continued her treatment for months, but the spot turned out to be a malignant melanoma. While she was still in the middle of treating asthma patients, the homoeopath died. Had she sought conventional treatment at an early stage, there would have been a 90 per cent chance that she would have survived for five years or more. By relying on homoeopathy, she had condemned herself to an inevitably early death.

The third problem is that anybody who is aware of the vast body of research and who still advises homoeopathy is misleading patients. In order to evoke the placebo effect, the patient has to be fooled into believing that homoeopathy is effective. In fact, bigger lies encourage bigger patient expectations and trigger bigger placebo effects, so exploiting the benefits of homoeopathy to the full would require homoeopaths to deliver the most fantastical justifications imaginable.

Over the past half-century, the trend has been towards a more open and honest relationship between doctor and patient, so homoeopaths who mislead patients flagrantly disregard ethical standards. Of course, many homoeopaths may be unaware of or may choose to disregard the vast body of scientific evidence against homoeopathy, but arrogance and ignorance in health care are also unforgivable sins.

If it is justifiable for the manufacturers of homoeopathic remedies in effect to lie about the efficacy of their useless products in order to evoke a placebo benefit, then maybe the pharmaceutical companies could fairly argue that they ought to be allowed to sell sugar pills at high prices on the basis of the placebo effect as well. This would undermine the requirement for rigorous testing of drugs before they go on sale.

A fourth reason for spurning placebo-based medicines is that patients who use them for relatively mild conditions can later be led into dangerously inappropriate use of the same treatments. Imagine a patient with back pain who is referred to a homoeopath and who receives a moderate, short-term placebo effect. This might impress the patient, who then returns to the homoeopath for other advice. For example, it is known that homoeopaths offer alternatives to conventional vaccination - a 2002 survey of homoeopaths showed that only 3 per cent of them advised parents to give their baby the MMR vaccine. Hence, directing patients towards homoeopaths for back pain could encourage those patients not to have their children vaccinated against potentially dangerous diseases.

Killer cures

Such advice and treatment are irresponsible and dangerous. When I asked a young student to approach homoeopaths for advice on malaria prevention in 2006, ten out of ten homoeopaths were willing to sell their own remedies instead of telling the student to seek out expert advice and take the necessary drugs.

The student had explained that she would be spending ten weeks in West Africa; we had decided on this backstory because this region has the deadliest strain of malaria, which can kill within three days. Nevertheless, homoeopaths were willing to sell remedies that contained no active ingredient. Apparently, it was the memory of the ingredient that would protect the student, or, as one homoeopath put it: "The remedies should lower your susceptibility; because what they do is they make it so your energy - your living energy - doesn't have a kind of malaria-shaped hole in it. The malarial mosquitoes won't come along and fill that in. The remedies sort it out."

The homoeopathic industry likes to present itself as a caring, patient-centred alternative to conventional medicine, but in truth it offers disproven remedies and often makes scandalous and reckless claims. On World Aids Day 2007, the Society of Homoeopaths, which represents professional homoeopaths in the UK, organised an HIV/Aids symposium that promoted the outlandish ambitions of several speakers. For example, describing Harry van der Zee, editor of the International Journal for Classical Homoeopathy, the society wrote: "Harry believes that, using the PC1 remedy, the Aids epidemic can be called to a halt, and that homoeopaths are the ones to do it."

There is one final reason for rejecting placebo-based medicines, perhaps the most important of all, which is that we do not actually need placebos to benefit from the placebo effect. A patient receiving proven treatments already receives the placebo effect, so to offer homoeopathy instead - which delivers only the placebo effect - would simply short-change the patient.

I do not expect that practising homoeopaths will accept any of my arguments above, because they are based on scientific evidence showing that homoeopathy is nothing more than a placebo. Even though this evidence is now indisputable, homoeopaths have, understandably, not shown any enthusiasm to acknowledge it.

For now, their campaign continues. Although it has not been updated for a while, the campaign website currently states that its petition has received only 382 signatures on paper, which means that there's a long way to go to reach the target of 250,000. But, of course, one of the central principles of homoeopathy is that less is more. Hence, in this case, a very small number of signatures may prove to be very effective. In fact, perhaps the Society of Homoeopaths should urge people to withdraw their names from the list, so that nobody at all signs the petition. Surely this would make it incredibly powerful and guaranteed to be effective.

"Trick or Treatment? Alternative Medicine on Trial" (Bantam Press, £16.99) by Simon Singh and Edzard Ernst is published on 21 April

Homoeopathy by numbers

3,000 registered homoeopaths in the UK

1 in 3 British people use alternative therapies such as homoeopathy

42% of GPs refer patients to homoeopaths

0 molecules of an active ingredient in a typical "30c" homoeopathic solution

$1m reward offered by James Randi for proof that homoeopathy works

This article first appeared in the 21 April 2008 issue of the New Statesman, Food crisis


David Cameron's fatal insouciance

Will future historians remember the former prime minister for anything more than his great Brexit bungle?

On 13 July 2016, after a premiership lasting six years and 63 days, David Cameron left Downing Street for the last time. On the tarmac outside the black door, with his wife and children at his side, he gave a characteristically cool and polished parting statement. Then he got in his car for the last journey to Buckingham Palace – the picture, as ever, of insouciant ease. As I was watching the television pictures of Cameron’s car gliding away, I remembered what he is supposed to have said some years earlier, when asked why he wanted to be prime minister. True or not, his answer perfectly captured the public image of the man: “Because I think I’d be rather good at it.”

A few moments later, a friend sent me a text message. It was just six words long: “He’s down there with Chamberlain now.”

At first I thought that was a bit harsh. People will probably always disagree about Cameron’s economic record, just as they do about Margaret Thatcher’s. But at the very least it was nowhere near as bad as some of his critics had predicted, and by some standards – jobs created, for instance – it was much better than many observers had expected. His government’s welfare and education policies have their critics, but it seems highly unlikely that people will still be talking about them in a few decades’ time. Similarly, although Britain’s intervention in Libya is unlikely to win high marks from historians, it never approached the disaster of Iraq in the public imagination.

Cameron will probably score highly for his introduction of gay marriage, and although there are many people who dislike him, polls suggested that most voters regarded him as a competent, cheerful and plausible occupant of the highest office in the land. To put it another way, from the day he entered 10 Downing Street until the moment he left, he always looked prime ministerial. It is true that he left office as a loser, humiliated by the EU referendum, and yet, on the day he departed, the polls had him comfortably ahead of his Labour opposite number. He was, in short, popular.

On the other hand, a lot of people liked Neville Chamberlain, too. Like Chamberlain, Cameron seems destined to be remembered for only one thing. When students answer exam questions about Chamberlain, it’s a safe bet that they aren’t writing about the Holidays with Pay Act 1938. And when students write about Cameron in the year 2066, they won’t be answering questions about intervention in Libya, or gay marriage. They will be writing about Brexit and the lost referendum.

It is, of course, conceivable, though surely very unlikely, that Brexit will be plain sailing. But it is very possible that it will be bitter, protracted and enormously expensive. Indeed, it is perfectly conceivable that by the tenth anniversary of the referendum, the United Kingdom could be reduced to an English and Welsh rump, struggling to come to terms with a punitive European trade deal and casting resentful glances at a newly independent Scotland. Of course the Brexiteers – Nigel Farage, Boris Johnson, Michael Gove, Daniel Hannan et al – would get most of the blame in the short run. But in the long run, would any of them really be remembered? Much more likely is that historians’ fingers would point at one man: Cameron, the leader of the Conservative and Unionist Party, the prime minister who gambled with his future and lost the Union. The book by “Cato” that destroyed Chamberlain’s reputation in July 1940 was entitled Guilty Men. How long would it be, I wonder, before somebody brought out a book about Cameron, entitled Guilty Man?

Naturally, all this may prove far too pessimistic. My own suspicion is that Brexit will turn out to be a typically European – or, if you prefer, a typically British – fudge. And if the past few weeks’ polls are anything to go by, Scottish independence remains far from certain. So, in a less apocalyptic scenario, how would posterity remember David Cameron? As a historic failure and “appalling bungler”, as one Guardian writer called him? Or as a “great prime minister”, as Theresa May claimed on the steps of No 10?

Neither. The answer, I think, is that it would not remember him at all.

***

The late Roy Jenkins, who – as Herbert Asquith’s biographer, Harold Wilson’s chancellor and Jim Callaghan’s rival – was passionately interested in such things, used to write of a “market” in prime ministerial futures. “Buy Attlee!” he might say. “Sell Macmillan!” But much of this strikes me as nonsense. For one thing, political reputations fluctuate much less than we think. Many people’s views of, say, Wilson, Thatcher and Blair have remained unchanged since the day they left office. Over time, reputations do not change so much as fade. Academics remember prime ministers; so do political anoraks and some politicians; but most people soon forget they ever existed. There are 53 past prime ministers of the United Kingdom, but who now remembers most of them? Outside the university common room, who cares about the Marquess of Rockingham, the Earl of Derby, Lord John Russell, or Arthur Balfour? For that matter, who cares about Asquith or Wilson? If you stopped people in the streets of Sunderland, how many of them would have heard of Stanley Baldwin or Harold Macmillan? And even if they had, how much would they really know about them?

In any case, what does it mean to be a success or a failure as prime minister? How on Earth can you measure Cameron’s achievements, or lack of them? We all have our favourites and our prejudices, but how do you turn that into something more dispassionate? To give a striking example, Margaret Thatcher never won more than 43.9 per cent of the vote, was roundly hated by much of the rest of the country and was burned in effigy when she died, long after her time in office had passed into history. Having come to power promising to revive the economy and get Britain working again, she contrived to send unemployment well over three million, presided over the collapse of much of British manufacturing and left office with the economy poised to plunge into yet another recession. So, in that sense, she looks a failure.

Yet at the same time she won three consecutive general elections, regained the Falklands from Argentina, pushed through bold reforms to Britain’s institutions and fundamentally recast the terms of political debate for a generation to come. In that sense, clearly she was a success. How do you reconcile those two positions? How can you possibly avoid yielding to personal prejudice? How, in fact, can you reach any vaguely objective verdict at all?

It is striking that, although we readily discuss politicians in terms of success and failure, we rarely think about what that means. In some walks of life, the standard for success seems obvious. Take the other “impossible job” that the tabloids love to compare with serving as prime minister: managing the England football team. You can measure a football manager’s success by trophies won, qualifications gained, even points accrued per game, just as you can judge a chief executive’s performance in terms of sales, profits and share values.

There is no equivalent for prime ministerial leadership. Election victories? That would make Clement Attlee a failure: he fought five elections and won only two. It would make Winston Churchill a failure, too: he fought three elections and won only one. Economic growth? Often that has very little to do with the man or woman at the top. Opinion polls? There’s more to success than popularity, surely. Wars? Really?

The ambiguity of the question has never stopped people trying. There is even a Wikipedia page devoted to “Historical rankings of Prime Ministers of the United Kingdom”, which incorporates two surveys of academics carried out by the University of Leeds, a BBC Radio 4 poll of Westminster commentators, a feature by BBC History Magazine and an online poll organised by Newsnight. By and large, there is a clear pattern. Among 20th-century leaders, there are four clear “successes” – Lloyd George, Churchill, Attlee and Thatcher – with the likes of Macmillan, Wilson and Heath scrapping for mid-table places. At the bottom, too, the same names come up again and again: Balfour, Chamberlain, Eden, Douglas-Home and Major. But some of these polls are quite old, dating back to the Blair years. My guess is that if they were conducted today, Major might rise a little, especially after the success of Team GB at the Olympics, and Gordon Brown might find himself becalmed somewhere towards the bottom.

***

So what makes the failures, well, failures? In two cases, the answer is simply electoral defeat. Both Arthur Balfour and John Major were doomed to failure from the moment they took office, precisely because they had been picked from within the governing party to replace strong, assertive and electorally successful leaders in Lord Salisbury and Margaret Thatcher, respectively. It’s true that Major unexpectedly won the 1992 election, but in both cases there was an atmosphere of fin de régime from the very beginning. Douglas-Home probably fits into this category, too, coming as he did at the fag end of 13 years of Conservative rule. Contrary to political mythology, he was in fact a perfectly competent prime minister, and came much closer to winning the 1964 election than many people had expected. But he wasn’t around for long and never really captured the public mood. It seems harsh merely to dismiss him as a failure, but politics is a harsh business.

That leaves two: Chamberlain and Eden. Undisputed failures, who presided over the greatest foreign policy calamities in our modern history. Nothing to say, then? Not so. Take Chamberlain first. More than any other individual in our modern history, he has become a byword for weakness, naivety and self-deluding folly.

Yet much of this picture is wrong. Chamberlain was not a weak or indecisive man. If anything, he was too strong: too stubborn, too self-confident. Today we remember him as a faintly ridiculous, backward-looking man, with his umbrella and wing collar. But many of his contemporaries saw him as a supremely modern administrator, a reforming minister of health and an authoritative chancellor who towered above his Conservative contemporaries. It was this impression of cool capability that secured Chamberlain the crown when Baldwin stepped down in 1937. Unfortunately, it was precisely his titanic self-belief, his unbreakable faith in his own competence, that also led him to overestimate his influence over Adolf Hitler. In other words, the very quality that people most admired – his stubborn confidence in his own ability – was precisely what doomed him.

In Chamberlain’s case, there is no doubt that he had lost much of his popular prestige by May 1940, when he stepped down as prime minister. Even though most of his own Conservative MPs still backed him – as most of Cameron’s MPs still backed him after the vote in favour of Brexit – the evidence of Mass Observation and other surveys suggests that he had lost support in the country at large, and his reputation soon dwindled to its present calamitous level.

The case of the other notable failure, Anthony Eden, is different. When he left office after the Suez crisis in January 1957, it was not because the public had deserted him, but because his health had collapsed. Surprising as it may seem, Eden was more popular after Suez than he had been before it. In other words, if the British people had had their way, Eden would probably have continued as prime minister. They did not see him as a failure at all.

Like Chamberlain, Eden is now generally regarded as a dud. Again, this may be a bit unfair. As his biographers have pointed out, he was a sick and exhausted man when he took office – the result of two disastrously botched operations on his gall bladder – and relied on a cocktail of painkillers and stimulants. Yet, to the voters who handed him a handsome general election victory in 1955, Eden seemed to have all the qualities to become an enormously successful prime minister: good looks, brains, charm and experience, like a slicker, cleverer and more seasoned version of Cameron. In particular, he was thought to have proved his courage in the late 1930s, when he had resigned as foreign secretary in protest at the appeasement of Benito Mussolini before becoming one of Churchill’s chief lieutenants.

Yet it was precisely Eden’s great asset – his reputation as a man who had opposed appeasement and stood up to the dictators – that became his weakness. In effect, he became trapped by his own legend. When the Egyptian dictator Gamal Abdel Nasser nationalised the Suez Canal in July 1956, Eden seemed unable to view it as anything other than a replay of the fascist land-grabs of the 1930s. Nasser was Mussolini; the canal was Abyssinia; failure to resist would be appeasement all over again. This was nonsense, really: Nasser was nothing like Mussolini. But Eden could not escape the shadow of his own political youth.

This phenomenon – a prime minister’s greatest strength gradually turning into his or her greatest weakness – is remarkably common. Harold Wilson’s nimble cleverness, Jim Callaghan’s cheerful unflappability, Margaret Thatcher’s restless urgency, John Major’s Pooterish normality, Tony Blair’s smooth charm, Gordon Brown’s rugged seriousness: all these things began as refreshing virtues but became big handicaps. So, in that sense, what happened to Chamberlain and Eden was merely an exaggerated version of what happens to every prime minister. Indeed, perhaps it is only pushing it a bit to suggest, echoing Enoch Powell, that all prime ministers, their human flaws inevitably amplified by the stresses of office, eventually end up as failures. In fact, it may not be too strong to suggest that in an age of 24-hour media scrutiny, surging populism and a general obsession with accountability, the very nature of the job invites failure.

***

In Cameron’s case, it would be easy to construct a narrative based on similar lines. Remember, after all, how he won the Tory leadership in the first place. He went into the 2005 party conference behind David Davis, the front-runner, but overhauled him after a smooth, fluent and funny speech, delivered without notes. That image of blithe nonchalance served him well at first, making for a stark contrast with the saturnine intensity and stumbling stiffness of his immediate predecessors, Michael Howard and Iain Duncan Smith. Yet in the end it was Cameron’s self-confidence that really did for him.

Future historians will probably be arguing for years to come whether he really needed to promise an In/Out referendum on the UK’s membership of the EU, as his defenders claim, to protect his flank against Ukip. What is not in doubt is that Cameron believed he could win it. It became a cliché to call him an “essay crisis” prime minister – a gibe that must have seemed meaningless to millions of people who never experienced the weekly rhythms of the Oxford tutorial system. And yet he never really managed to banish the impression of insouciance. The image of chillaxing Dave, the PM so cockily laidback that he left everything until the last minute, may be a caricature, but my guess is that it will stick.

As it happens, I think Cameron deserves more credit than his critics are prepared to give him. I think it would be easy to present him as a latter-day Baldwin – which I mean largely as a compliment. Like Baldwin, he was a rich provincial Tory who posed as an ordinary family man. Like Baldwin, he offered economic austerity during a period of extraordinary international financial turmoil. Like Baldwin, he governed in coalition while relentlessly squeezing the Liberal vote. Like Baldwin, he presented himself as the incarnation of solid, patriotic common sense; like Baldwin, he was cleverer than his critics thought; like Baldwin, he was often guilty of mind-boggling complacency. The difference is that when Baldwin gambled and lost – as when he called a rash general election in 1923 – he managed to save his career from the ruins. When Cameron gambled and lost, it was all over.

Although I voted Remain, I do not share many commentators’ view of Brexit as an apocalyptic disaster. In any case, given that a narrow majority of the electorate got the result it wanted, at least 17 million people presumably view Cameron’s gamble as a great success – for Britain, if not for him. Unfortunately for Cameron, however, most British academics are left-leaning Remainers, and it is they who will write the history books. What ought also to worry Cameron’s defenders – or his shareholders, to use Roy Jenkins’s metaphor – is that both Chamberlain and Eden ended up being defined by their handling of Britain’s foreign policy. There is a curious paradox here, because foreign affairs almost never matters at the ballot box. In 1959, barely three years after Suez, the Conservatives cruised to an easy re-election victory; in 2005, just two years after invading Iraq, when the extent of the disaster was already apparent, Blair won a similarly comfortable third term in office. Perhaps foreign affairs matters more to historians than it does to most voters. In any case, the lesson seems to be that, if you want to secure your historical reputation, you can get away with mishandling the economy and lengthening the dole queues, but you simply cannot afford to damage Britain’s international standing.

So, if Brexit does turn into a total disaster, Cameron can expect little quarter. Indeed, while historians have some sympathy for Chamberlain, who was, after all, motivated by a laudable desire to avoid war, and even for Eden, who was a sick and troubled man, they are unlikely to feel similar sympathy for an overconfident prime minister at the height of his powers, who seems to have brought his fate upon himself.

How much of this, I wonder, went through David Cameron’s mind in the small hours of that fateful morning of 24 June, as the results came through and his place in history began to take shape before his horrified eyes? He reportedly likes to read popular history for pleasure; he must occasionally have wondered how he would be remembered. But perhaps it meant less to him than we think. Most people give little thought to how they will be remembered after their death, except by their closest friends and family members. There is something insecure, something desperately needy, about people who dwell on their place in history.

Whatever you think about Cameron, he never struck me as somebody suffering from excessive insecurity. Indeed, his normality was one of the most likeable things about him.

He must have been deeply hurt by his failure. But my guess is that, even as his car rolled away from 10 Downing Street for the last time, his mind was already moving on to other things. Most prime ministers leave office bitter, obsessive and brooding. But, like Stanley Baldwin, Cameron strolled away from the job as calmly as he had strolled into it. It was that fatal insouciance that brought him down. 

Dominic Sandbrook is a historian, broadcaster and columnist for the Daily Mail. His book The Great British Dream Factory will be published in paperback by Penguin on 1 September

This article first appeared in the 25 August 2016 issue of the New Statesman, Cameron: the legacy of a loser