Bad Pharma: How drug companies mislead doctors and harm patients
Fourth Estate, 448pp, £13.99
Ben Goldacre is angry. Not just about the sharp practice – and sometimes outright fraud – practised by pharmaceutical companies as they develop and market drugs, but also about how difficult it is to get anyone interested in it. That’s not Goldacre’s fault: Bad Pharma is an engaging, polemical and elegant book, written with the lay reader in mind. It’s just that explaining the myriad ways in which the evidence base of medicine is distorted, and the effect that has on real people, will never fit in a slogan, a headline or a tweet.
In the first chapter, Goldacre sets out his stall. “Drugs are tested by the people who manufacture them, in poorly designed trials, on hopelessly small numbers of weird, unrepresentative patients, and analysed using techniques that are flawed by design, in such a way that they exaggerate the benefits of treatments,” he writes. “When trials throw up results that companies don’t like, they are perfectly entitled to hide them from doctors and patients . . . academic papers, which everyone thinks of as objective, are often covertly planned and written by people who work directly for the companies, without disclosure.”
Goldacre made his name with the Guardian’s “Bad Science” column but it’s been clear for a while that statistics are what really energise him. Most politicians and journalists notoriously find numbers baffling; very clever and influential people get away with epic innumeracy where a slight verbal stumble would be ruthlessly derided. Contrast the sniggering over David Cameron not knowing the translation of “Magna Carta” with the finding from the Royal Statistical Society that 77 per cent of Labour MPs could not correctly answer the question: “If you spin a coin twice, what is the probability of getting two heads?” (It’s 25 per cent, by the way.)
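For readers inclined to check the arithmetic rather than take it on trust: two fair, independent tosses each land heads half the time, so the chance of two heads is 0.5 × 0.5 = 0.25. A few lines of Python (purely illustrative, not from the book) confirm this by simulation:

```python
import random

random.seed(0)  # fixed seed so the estimate is reproducible
trials = 100_000

# Count runs in which both independent "tosses" come up heads
two_heads = sum(
    random.random() < 0.5 and random.random() < 0.5
    for _ in range(trials)
)

print(f"Estimated P(two heads): {two_heads / trials:.3f}")  # close to 0.25
```

Over 100,000 simulated double-tosses the estimate settles very near the exact answer of 25 per cent.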
Doctors do at least have some training in appraising evidence but as Goldacre shows, there are so many ways you can skew a clinical trial that it’s unrealistic to expect a GP or consultant to spot any dodgy data. For example, you could recruit patients to your trial who have no other medical conditions or drug prescriptions, making them more likely to get better. You can test a drug against a sugar-pill placebo, instead of the best current competitor. You can stop a trial early if it looks like it’s going well, or prolong it in the hope that the results will even out. You can find a fluke “clump” of encouraging results about one minor symptom and pretend that’s what the trial was going to measure all along.
Running alongside all of these practices – for which the researchers involved must take some responsibility – is the simple fact that the whole architecture of research publication is tilted towards new, exciting and positive results. There is currently no requirement for the results of every trial to be made public, so naturally academics only want to bother when they’ve found something interesting. Journal editors also worry that research which discovers a treatment has no benefit, or replicates a previous study, is boring. This flatters the drugs and helps their manufacturers reap billions from them.
If all this sounds too crunchy, then some of Goldacre’s case studies will remind you why this stuff is important. Take the “Elephant Man” trial in 2006, where six volunteers were left with rotting toes and fingers, unable to breathe without assistance, because they were given a new drug called TGN1412, which had never been tested on humans before. An inquiry into the fiasco found that a medicine with a similar action in the body had been tried on a patient, years before. It had made that person very unwell, but – you guessed it – the research wasn’t published.
On a grander scale, GlaxoSmithKline concealed the fact that one of its antidepressants, paroxetine, increased the risk of suicide among children. It managed this because the drug was officially only licensed for use in over-18s and because it mixed the safety data for children in with that of adults, diluting the apparent risk.
That case is not all that exceptional. The list of fines given to drug companies is stomach-turning: $1.4bn to Eli Lilly for wrongly promoting a schizophrenia drug; $2.3bn to Pfizer for pushing the painkiller Bextra, and so on. These huge sums explain the grand failure that lies behind the scandal of Bad Pharma: regulation. As in banking, the regulators struggled to operate across different jurisdictions, against multinational companies with far more money than them; as in Hackgate, they have been too cosy with the industry they are supposed to patrol.
But the real strength of Goldacre’s book is that he has answers. If poorly funded and easily swayed regulators can’t police the industry, then make the data available to everyone. Replace bewildering consent forms with shorter ones in plain English. Replace the endless drug information labels that list every conceivable side effect (from heart attacks to bad breath) with simple checklists that show how common each one is.
This is an important book. Ben Goldacre is angry, and by the time you put Bad Pharma down, you should be too.