Treat with extreme caution

Homoeopathic medicine is founded on a bogus philosophy. Its continued use is a drain on NHS resources.

Two years ago, a loose coalition of like-minded scientists wrote an open letter to chief executives of the National Health Service Trusts. The signatories simply stated that homoeopathy and other alternative therapies were unproven, and that the NHS should reserve its funds for treatments that had been shown to work. The letter triggered an extraordinary downturn in the fortunes of homoeopathy in the UK: over the following year, the overwhelming majority of trusts either stopped sending patients to the four homoeopathic hospitals or introduced measures to strictly limit referrals.

Consequently, the future of these hospitals is now in doubt. The Tunbridge Wells Homoeopathic Hospital is set to close next year and the Royal London Homoeopathic Hospital is likely to follow in its wake. Homoeopaths are now so worried about the collapse of their flagship hospitals that they are organising a march to deliver a petition to Downing Street on 22 June. Local campaign groups are being formed and patients are being urged to sign the petition.

Homoeopaths believe that the medical Establishment is crushing a valuable healing tradition that dates back more than two centuries and that still has much to offer patients. Homoeopaths are certainly passionate about the benefits of their treatment, but are their claims valid, or are they misguidedly promoting a bogus philosophy?

This is a question that I have been considering for the past two years, ever since I began co-authoring a book on the subject of alternative medicine with Professor Edzard Ernst. He was one of the signatories of the letter to the NHS trusts and is the world's first professor of complementary medicine. Before I present our conclusion, it is worth remembering why homoeopathy has always existed beyond the borders of mainstream medicine.

Homoeopathy relies on two key principles, namely that like cures like, and that smaller doses deliver more powerful effects. In other words, if onions cause our eyes to stream, then a homoeopathic pill made from onion juice might be a potential cure for the eye irritation caused by hay fever. Crucially, the onion juice would need to be diluted repeatedly to produce the pill that can be administered to the patient, as homoeopaths believe that less is more.

Initially, this sounds attractive, and not dissimilar to the principle of vaccination, whereby a small amount of virus can be used to protect patients from viral infection. However, doctors use the principle of like cures like very selectively, whereas homoeopaths use it universally. Moreover, a vaccination always contains a measurable amount of active ingredient, whereas homoeopathic remedies are usually so dilute that they contain no active ingredient whatsoever.
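The scale of these dilutions is worth pausing over. As a rough, illustrative calculation (Avogadro's number and the meaning of a "30c" dilution are standard; the starting quantity of a full mole is a deliberately generous assumption), even that generous starting point leaves, in expectation, nothing at all:

```python
# Expected number of molecules left after a "30c" homoeopathic
# dilution: one part in 100, repeated 30 times (a factor of 10^60).
AVOGADRO = 6.022e23         # molecules in one mole of a substance
dilution_factor = 100 ** 30

# Generous assumption: the remedy starts from a full mole of extract.
expected_molecules = AVOGADRO / dilution_factor
print(f"{expected_molecules:.1e}")  # ~6.0e-37, i.e. effectively zero
```

This is the arithmetic behind the "0 molecules" figure often quoted for a typical 30c remedy.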

A pill that contains no medicine is unlikely to be effective, but millions of patients swear by this treatment. From a scientific point of view, the obvious explanation is that any perceived benefit is purely a result of the placebo effect, because it is well established that any patient who believes in a remedy is likely to experience some improvement in their condition due to the psychological impact. Homoeopaths disagree, and claim that a "memory" of the homoeopathic ingredient has a profound physiological effect on the patient. So the key question is straightforward: is homoeopathy more than just a placebo treatment?

Fortunately, medical researchers have conducted more than 200 clinical trials to investigate the impact of homoeopathy on a whole range of conditions. Typically, one group of patients is given homoeopathic remedies and another group is given a known placebo, such as a sugar pill. Researchers then examine whether or not the homoeopathic group improves on average more than the placebo group. The overall conclusion from all this research is that homoeopathic remedies are indeed mere placebos.

In other words, their benefit is based on nothing more than wishful thinking. The latest and most definitive overview of the evidence was published in the Lancet in 2005 and was accompanied by an editorial entitled "The end of homoeopathy". It argued that ". . . doctors need to be bold and honest with their patients about homoeopathy's lack of benefit".

An unsound investment

However, even if homoeopathy is a placebo treatment, anybody working in health care will readily admit that the placebo effect can be a very powerful force for good. Therefore, it could be argued that homoeopaths should be allowed to flourish as they administer placebos that clearly appeal to patients. Despite the undoubted benefits of the placebo effect, however, there are numerous reasons why it is unjustifiable for the NHS to invest in homoeopathy.

First, it is important to recognise that money spent on homoeopathy means a lack of investment elsewhere in the NHS. It is estimated that the NHS spends £500m annually on alternative therapies - money that, instead of going on unproven or disproven treatments, could pay for 20,000 more nurses. Another way to appreciate the sum involved is to consider the recent refurbishment of the Royal London Homoeopathic Hospital, which was completed in 2005 and cost £20m. The hospital is part of the University College London Hospitals NHS Foundation Trust, which contributed £10m to the refurbishment, even though it had to admit a deficit of £17.4m at the end of 2005. In other words, most of the overspend could have been avoided if the Trust had not spent so much money on refurbishing the spiritual home of homoeopathy.

Second, the placebo effect is real, but it can lull patients into a false sense of security by improving their sense of well-being without actually treating the underlying conditions. This might be all right for patients suffering from a cold or flu, which should clear up given time, but for more severe illnesses, homoeopathic treatment could lead to severe long-term problems. Because those who administer homoeopathic treatment are outside of conventional medicine and therefore largely unmonitored, it is impossible to prove the damage caused by placebo. Nevertheless, there is plenty of anecdotal evidence to support this claim.

For example, in 2003 Professor Ernst was working with homoeopaths who were taking part in a study to see if they could treat asthma. Unknown to the professor or any of the other researchers, one of the homoeopaths had a brown spot on her arm, which was growing in size and changing in colour. Convinced that homoeopathy was genuinely effective, the homoeopath decided to treat it herself using her own remedies. Buoyed by the placebo effect, she continued her treatment for months, but the spot turned out to be a malignant melanoma. While she was still in the middle of treating asthma patients, the homoeopath died. Had she sought conventional treatment at an early stage, there would have been a 90 per cent chance that she would have survived for five years or more. By relying on homoeopathy, she had condemned herself to an inevitably early death.

The third problem is that anybody who is aware of the vast body of research and who still advises homoeopathy is misleading patients. In order to evoke the placebo effect, the patient has to be fooled into believing that homoeopathy is effective. In fact, bigger lies encourage bigger patient expectations and trigger bigger placebo effects, so exploiting the benefits of homoeopathy to the full would require homoeopaths to deliver the most fantastical justifications imaginable.

Over the past half-century, the trend has been towards a more open and honest relationship between doctor and patient, so homoeopaths who mislead patients flagrantly disregard ethical standards. Of course, many homoeopaths may be unaware of or may choose to disregard the vast body of scientific evidence against homoeopathy, but arrogance and ignorance in health care are also unforgivable sins.

If it is justifiable for the manufacturers of homoeopathic remedies in effect to lie about the efficacy of their useless products in order to evoke a placebo benefit, then maybe the pharmaceutical companies could fairly argue that they ought to be allowed to sell sugar pills at high prices on the basis of the placebo effect as well. This would undermine the requirement for rigorous testing of drugs before they go on sale.

A fourth reason for spurning placebo-based medicines is that patients who use them for relatively mild conditions can later be led into dangerously inappropriate use of the same treatments. Imagine a patient with back pain who is referred to a homoeopath and who receives a moderate, short-term placebo effect. This might impress the patient, who then returns to the homoeopath for other advice. For example, it is known that homoeopaths offer alternatives to conventional vaccination - a 2002 survey of homoeopaths showed that only 3 per cent of them advised parents to give their baby the MMR vaccine. Hence, directing patients towards homoeopaths for back pain could encourage those patients not to have their children vaccinated against potentially dangerous diseases.

Killer cures

Such advice and treatment are irresponsible and dangerous. When I asked a young student to approach homoeopaths for advice on malaria prevention in 2006, ten out of ten homoeopaths were willing to sell their own remedies instead of telling the student to seek out expert advice and take the necessary drugs.

The student had explained that she would be spending ten weeks in West Africa; we had decided on this backstory because this region has the deadliest strain of malaria, which can kill within three days. Nevertheless, homoeopaths were willing to sell remedies that contained no active ingredient. Apparently, it was the memory of the ingredient that would protect the student, or, as one homoeopath put it: "The remedies should lower your susceptibility; because what they do is they make it so your energy - your living energy - doesn't have a kind of malaria-shaped hole in it. The malarial mosquitoes won't come along and fill that in. The remedies sort it out."

The homoeopathic industry likes to present itself as a caring, patient-centred alternative to conventional medicine, but in truth it offers disproven remedies and often makes scandalous and reckless claims. On World Aids Day 2007, the Society of Homoeopaths, which represents professional homoeopaths in the UK, organised an HIV/Aids symposium that promoted the outlandish ambitions of several speakers. For example, describing Harry van der Zee, editor of the International Journal for Classical Homoeopathy, the society wrote: "Harry believes that, using the PC1 remedy, the Aids epidemic can be called to a halt, and that homoeopaths are the ones to do it."

There is one final reason for rejecting placebo-based medicines, perhaps the most important of all, which is that we do not actually need placebos to benefit from the placebo effect. A patient receiving proven treatments already receives the placebo effect, so to offer homoeopathy instead - which delivers only the placebo effect - would simply short-change the patient.

I do not expect that practising homoeopaths will accept any of my arguments above, because they are based on scientific evidence showing that homoeopathy is nothing more than a placebo. Even though this evidence is now indisputable, homoeopaths have, understandably, not shown any enthusiasm to acknowledge it.

For now, their campaign continues. Although it has not been updated for a while, the campaign website currently states that its petition has received only 382 signatures on paper, which means that there's a long way to go to reach the target of 250,000. But, of course, one of the central principles of homoeopathy is that less is more. Hence, in this case, a very small number of signatures may prove to be very effective. In fact, perhaps the Society of Homoeopaths should urge people to withdraw their names from the list, so that nobody at all signs the petition. Surely this would make it incredibly powerful and guaranteed to be effective.

"Trick or Treatment? Alternative Medicine on Trial" (Bantam Press, £16.99) by Simon Singh and Edzard Ernst is published on 21 April

Homoeopathy by numbers

3,000 registered homoeopaths in the UK

1 in 3 British people use alternative therapies such as homoeopathy

42% of GPs refer patients to homoeopaths

0 molecules of an active ingredient in a typical "30c" homoeopathic solution

$1m reward offered by James Randi for proof that homoeopathy works

This article first appeared in the 21 April 2008 issue of the New Statesman, Food crisis


Why don’t we learn from our mistakes – even when it matters most?

Juan Rivera served nearly two decades in prison until DNA evidence overturned his conviction. But even now, some maintain his guilt.

On the afternoon of 17 August 1992, an 11-year-old girl called Holly Staker walked from her home to a neighbour’s apartment in Waukegan, a small town in Illinois. She had been asked to babysit two children, a girl aged two and a boy of five. By 8pm Holly was dead. An unidentified intruder had broken into the apartment, violently raped her, and stabbed her 27 times. The local police force pursued 600 leads and interviewed 200 people, but within a few weeks the trail had run cold.

Then, through the testimony of a jailhouse informant, police happened upon a new suspect: Juan Rivera, a 19-year-old man who lived a few miles south of the murder scene. Over four days, Rivera, who had a history of psychological problems, was subjected to a gruelling examination by the Lake County Task Force. At one point it seemed to get too much for Rivera. Officers saw him pulling out a clump of hair and banging his head on the wall.

On the third day, as the interviewers’ tone became aggressive, Rivera finally nodded his head when asked if he had committed the crime. By this time, his hands and feet were tied together, and he was confined to a padded cell. On the basis of his confession, police prepared a statement for Rivera to sign.

There were no witnesses to the crime. There was no physical evidence. But there was a brutally murdered young girl, a community still in mourning, and that signed confession. The jury didn’t take long to make up their minds. Rivera was convicted of first-degree murder and sentenced to life in prison.

Many journalists who covered the case were uneasy. They could see that the case hinged on the confession of a disturbed young man, who had retracted hours after signing it. But the police and prosecutors felt vindicated. It had been a troubling crime. A man had been sentenced. Holly’s family could find closure.

Or could they?


Eight years earlier, in 1984, Alec Jeffreys, a British scientist, had the insight that would lead to DNA fingerprinting – a technique that would transform the criminal justice system. In the absence of contamination, and provided the test is administered correctly, the odds of two unrelated people having matching DNA are roughly one in a billion. In certain cases, such as rape, it would in time be possible to identify the attacker conclusively, based on DNA extracted from the crime scene. After all, if the police swabbed the sperm found in the victim, they could narrow the number of potential suspects to just one. This is why DNA fingerprinting has helped to secure thousands of convictions – it has a unique power in establishing guilt.
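To get a feel for that one-in-a-billion figure, a brief back-of-the-envelope sketch (the database size here is hypothetical, not from the article) shows why it must be read as a per-comparison probability: checking a crime-scene profile against a single named suspect is close to conclusive, while trawling it against millions of records multiplies the opportunities for a purely coincidental hit.

```python
# The one-in-a-billion figure is a per-comparison probability.
p_match = 1e-9              # random-match probability per unrelated person
database_size = 5_000_000   # hypothetical database, for illustration only

# Probability of at least one coincidental hit somewhere in the trawl.
p_false_hit = 1 - (1 - p_match) ** database_size
print(f"{p_false_hit:.4f}")  # ~0.0050: about one trawl in 200
```

For a one-to-one comparison, by contrast, the chance of a coincidental match stays at one in a billion, which is what gives the technique its power both to convict and to exonerate.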

But it also has implications for cases that have already been tried: the power to exonerate. If the DNA from the sperm in a rape victim has been stored, and if it does not match that of the person serving time in prison, the conclusion is difficult to deny: it came from a different man, the real criminal.

In August 1989, just months after DNA fingerprinting technology became advanced enough for use in the criminal justice system, Gary Dotson, who had been convicted of a 1977 rape in Chicago, was released from jail, having consistently proclaimed his innocence. Underwear worn by the victim had been sent for DNA testing, which showed that the semen belonged to a different man. Dotson had served more than ten years in jail.

The first DNA exoneration in the UK involved Michael Shirley, a young sailor who had been convicted of the rape and murder of Linda Cook, a barmaid working in Portsmouth, in December 1986. A simple DNA test performed in 2001, after police finally admitted they had stored swabs from the crime, proved that the semen found in the victim did not belong to Shirley. He had served 16 years at the time of his release.

By 2005, in the United States alone, the convictions of more than 300 people had been overturned following DNA tests. It seemed that the criminal justice system was getting it wrong repeatedly. In situations where evidence had been kept, clients of the Innocence Project, an American charity that helps prisoners who say they were wrongfully convicted, were exonerated in almost half the cases. A study led by Samuel R Gross, a professor at the University of Michigan Law School, concluded: “If we reviewed prison sentences with the same level of care that we devote to death sentences, there would have been over 28,500 non-death-row exonerations [in the US] in the past 15 years rather than the 255 that have in fact occurred.”

At that point, Juan Rivera, the man convicted of killing Holly Staker, had been in jail for almost 13 years. His lawyers decided to apply for a DNA test. It hadn’t been performed at the trial because techniques were not, at that time, sufficiently advanced to analyse very small samples of tissue. When the results came back they were conclusive: the semen found inside Holly Staker did not belong to Rivera.

He was yet another victim of wrongful conviction.


When a system produces an error, it is a tragedy for the person on the wrong side of the mistake. But it is also a precious learning opportunity, a chance to figure out what went wrong, and make reforms to the system to ensure that the same mistakes don’t happen again.

Aviation is a powerful model in this respect. Every plane is equipped with two black boxes, which record vital information. If an accident occurs, these boxes are recovered, the failure analysed, and reforms implemented. Pilots report near-miss events, too, which are statistically analysed in order to help prevent accidents.

In 1912, eight of the 14 qualified US army pilots died in crashes. By 2014, the accident rate for major airlines had dropped to just one crash for every 8.3 million take-offs. That's the power of learning from mistakes.

Broadly speaking, this is how science works. Scientists make testable predictions; when these fail, the theories are reformed. As the philosopher Karl Popper put it: “The history of science . . . is a history . . . of error. But science is one of the very few human activities – perhaps the only one – in which errors are systematically criticised and fairly often, in time, corrected. This is why . . . we can speak clearly and sensibly about making progress there.”

But there is a practical problem with adopting this model of progress. Many people don’t like to admit to their mistakes. They do not meet their failures with intellectual honesty and a willingness to understand what went wrong, but with back covering and defensiveness. This has a simple consequence: if mistakes are not learned from, they will be made again and again. And again.

In his famous studies of “cognitive dissonance” in the 1950s, the social psychologist Leon Festinger discovered the array of tactics used by people to spin, reframe and cover up their mistakes. A classic example involved a small group of cult members. They had left their families to live with a housewife called “Marian Keech” (real name: Dorothy Martin), who had predicted the world would end before dawn on 21 December 1954. She had also prophesied that the members of the cult would be saved from the looming apocalypse by a spaceship that would land at Keech’s small house in suburban Minnesota at midnight.

Neither the apocalypse nor the spaceship materialised (Keech’s husband, who was a non-believer, had gone to his bedroom and slept through the whole thing). This was an unambiguous failure. Keech had said the world would end, and that a spaceship would save true believers. Neither had happened. The cult members could have responded by altering their beliefs about Keech’s supernatural insights.

Instead, they altered the “evidence”. At first, the cult members kept checking outside to see if the spaceship had landed. Then, as the clock ticked past midnight, they became sullen and bemused. Ultimately, however, they became defiant. Just as Festinger had predicted before he infiltrated the cult, the faith of hardcore members was unaffected by what should have been a crushing disappointment. In fact, their faith seemed to grow stronger.

How is this possible? As Festinger recounted in his book When Prophecy Fails, first published in 1956, they simply redefined the failure. “The godlike figure is so impressed with our faith that he has decided to give the planet a second chance,” they proclaimed (I am paraphrasing only a little). “We saved the world!” Far from abandoning the cult, members went out on a recruitment drive. As Festinger put it: “The little group, sitting all night long, had spread so much light that God had saved the world from destruction.” They were “jubilant”.

This may sound bizarre, but it is, in fact, commonplace. Festinger showed that this behaviour, though seemingly extreme, provides an insight into psychological mechanisms that are universal. When we are confronted with evidence that challenges our deeply held beliefs we are more likely to reframe the evidence than we are to alter our beliefs. We simply invent new reasons, new justifications, new explanations. Sometimes we ignore the evidence altogether.

Think, for example, of health care. In her book After Harm, published in 2005, the American researcher Nancy Berlinger investigated how doctors reframe their mistakes. “Observing more senior physicians, students learn that their mentors and supervisors believe in, practise and reward the concealment of errors,” she wrote. “They learn how to talk about unanticipated outcomes until a ‘mistake’ morphs into a ‘complication’. Above all, they learn not to tell the patient anything.”

This is partly about fear of litigation, but also about protecting the ego. Think of the language associated with senior physicians. Surgeons work in a “theatre”. This is the stage where they “perform”. How dare they fluff their lines? As the safety expert James Reason put it: “After a long education, you are expected to get it right. The consequence is that medical errors are marginalised and stigmatised.”

But if doctors do not learn from their mistakes, they are destined to repeat them. A report by the UK National Audit Office in 2005 estimated that up to 34,000 people are killed each year as a result of human error in the health-care system. It put the overall number of patient incidents (fatal and non-fatal) at 974,000. In the US, annually, 400,000 people die because of preventable error in hospitals alone. That is the equivalent of two jumbo jets crashing every day.
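The "two jumbo jets" comparison can be checked with simple arithmetic (the aircraft capacity is an assumed round figure for a full 747-class load):

```python
# Sanity-checking the comparison: 400,000 preventable hospital
# deaths a year versus "two jumbo jets crashing every day".
deaths_per_day = 400_000 / 365   # ~1,096 deaths per day
jumbo_capacity = 550             # assumed full 747-class load
jets_per_day = deaths_per_day / jumbo_capacity
print(round(deaths_per_day), round(jets_per_day, 1))
```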

Another example is economics. In 2010, a group of influential thinkers, including Michael J Boskin, a former chairman of the US president’s Council of Economic Advisers, and the historian Niall Ferguson, wrote an open letter to the chairman of the Federal Reserve. The signatories were worried that a second tranche of quantitative easing would “risk currency debasement and inflation” and “distort financial markets”. The letter was published in the Wall Street Journal and New York Times.

At the time, inflation was running at 1.5 per cent, but what happened in the aftermath? Did inflation soar? In fact, by December 2014, inflation had not merely stayed at historically low levels, but had fallen to 0.8 per cent (soon after that, it fell into negative territory). The US economy was creating jobs at a fast rate, too, while unemployment had dropped and companies were reporting record profits.

This was a stark error in prediction, an opportunity to revise theoretical assumptions (just as mistakes in medicine provide an opportunity to revise procedures). But the signatories saw it differently. David Malpass, a deputy assistant secretary to the Treasury under Ronald Reagan, said “the letter was correct as stated”. John Taylor, an economics professor, said: “The letter mentioned many things – the risk of inflation . . . to employment . . . – and all have happened.”

Again, like revered clinicians, these prestigious thinkers could not bear that they had got things wrong. So, instead of learning from their mistake, they spun it. Some of the signatories came up with detailed graphs and tables to show that they were right all along.

Consider that if inflation had soared, they would have claimed this as a vindication for their prediction (as the cult members would have said the apocalypse had materialised). Yet these eminent thinkers also claimed vindication when inflation fell (just like the cult members when the spaceship didn’t turn up). Heads I win; tails I don’t lose.

And this is why it is the social pundits with the most prestigious reputations – as measured by how often they tour the television studios – who often make the worst predictions. They have the most to lose from mistakes and are therefore the most likely to come up with ex post explanations for why they were right all along. And because these pundits are clever, their rationalisations sound plausible.

But it is also why they don’t learn from their mistakes. After all, if you can’t admit when you are wrong, how can you ever put things right?


This brings us back to the criminal justice system. Many of these DNA cases show unequivocal failures. They demonstrate almost beyond doubt that mistakes were made by the police, prosecution and courts. This should have led to an acknowledgement of error, and meaningful reform. In fact, the astonishing thing is not how little the system – or at least the people behind it – learned, but the extent to which it continued to evade and deny.

When the DNA results came back for Juan Rivera, state prosecutors offered a new story to account for the evidence – a very different story from the one that they had presented at the original trial. Holly, an 11-year-old child, had had consensual sex with a lover a few hours before the attack, prosecutors claimed. This accounted for the semen. And Rivera? He had happened upon Holly after intercourse had taken place, and murdered her.

“It was a grotesque way of squaring the new evidence with their unshakeable belief that Rivera was guilty,” Steven Art, one of the jailed man’s lawyers, told me. “But it was also totally inconsistent with the overwhelming evidence that Holly had been raped, quite brutally. There were signs of vaginal and anal trauma and stab wounds in her genitals.”

The prosecutor’s new story may have seemed outlandish, but the consequences were very real. Rivera was not released from prison for another six years. “I got stabbed twice and endured three attempted rapes,” he told me during a phone interview last year. “People wanted to hurt me; they thought that I was a child rapist. But perhaps the toughest thing of all was knowing that I was innocent.”

Rivera’s experience was far from unusual. The vast majority of exonerations through DNA evidence were met with breathtaking obstruction. Nothing seemed to budge prosecutors from their conviction that the man who had been sent to prison was guilty. Even after the test had been performed. Even after the conviction had been overturned. Even after the prisoner had been released from jail.

The problem was not the strength of the evidence, which was often overwhelming; it was the psychological difficulty in accepting it.

The process often took a distinctive path. First the prosecutors would try to deny access to DNA evidence. When that strategy was batted away, and the test excluded the convict as the source of the DNA, they would claim that it had not been carried out correctly. This didn’t last long, either, because when the test was redone it would invariably come back with the same result. The next stage in rape cases was for the prosecutor to argue that the semen belonged to a different man who was not the murderer. In other words, the victim had consensual sex with another man, but was subsequently raped by the prisoner, who used a condom.

The presence of an entirely new man, not mentioned at the initial trial, for whom there were no eyewitnesses, and who the victim often couldn’t remember having sex with, may seem like a desperate ploy to evade the evidence. But it has been used so often that it has been given a name by defence lawyers: “the unindicted co-ejaculator”.

In an adversarial system you would expect any new evidence secured by the defence to be looked at with healthy scepticism by prosecutors. You would expect them to give it scrutiny and to look at the wider context to be sure it stacks up. But in case after case, the sense of denial from prosecutors and police went a lot further.

And this explains, in turn, why the system was making so many mistakes. Proper investigation into wrongful convictions by the Innocence Project in recent years has discovered profound weaknesses in forensic science, police detection methods, eyewitness testimony and more. If these investigations had taken place earlier, and reforms been introduced, untold suffering could have been averted. But these opportunities were missed repeatedly, on both sides of the Atlantic.

This is why legal campaigners argue that the most robust way to improve the system, so as to reduce wrongful convictions and simultaneously increase rightful convictions, is to establish innocence commissions. These are independent bodies, mandated to investigate wrongful convictions and to recommend reforms, along the lines of air accident investigation teams. As of publication, only 11 states in the US had such commissions.

In the UK, a reform commission of sorts was set up in 1995 after a series of spectacular miscarriages of justice, including the Birmingham Six and the Guildford Four. The Criminal Cases Review Commission, an independent body, has the authority to refer questionable verdicts to the Court of Appeal. Between 1997 and the end of October 2013 it referred 538 cases in total.

Of these, 70 per cent succeeded at appeal.


More than 20 years after Juan Rivera was sentenced, a DNA test was conducted on a bloodstained piece of timber that had been used in a different murder. In January 2000, a man called Delwin Foxworth, who also lived in Lake County, was beaten savagely with a plank, doused with gasoline and set on fire. He died of his injuries, having suffered burns over 80 per cent of his body.

The DNA of the blood found on the wood matched that of the semen found in Holly Staker. Police are now almost certain that the man who got away with the rape and murder of an 11-year-old girl in 1992 went on to commit another murder eight years later. Foxworth may be yet another victim of the wrongful conviction of Juan Rivera: the conviction allowed the real culprit to remain free, and to kill again. (The murderer has never been found.)

But what about those who were responsible for sending Rivera to jail? How do they feel about it today? Perhaps it should come as no surprise that even now many remain convinced of his guilt. In October 2014, Charles Fagan, an investigator who helped obtain Rivera’s confessions, was asked if he still believed that Rivera had committed the murder. “I think so,” he told a local newspaper.

And what of the prosecutors? Even after Rivera was released, Lake County lawyers wanted to put him back on trial. Only with a further conviction would they be able to say that they had been right all along. Only with a conviction could they quell their dissonance. Rivera walking around free was like an accusation against their competence.

It was left to the Illinois Appellate Court to take what might otherwise seem to be an astonishing step: it barred Lake County from ever prosecuting Juan Rivera for the murder of Holly Staker again.

Matthew Syed’s latest book is “Black Box Thinking: the Surprising Truth About Success” (John Murray)

This article first appeared in the 12 November 2015 issue of the New Statesman, Isis and the threat to Britain