
28 November 2013

Scientific journals should stop trying to be exciting – and focus on being right

Scientists desperate to have an "impact" in their field are cherry-picking and misrepresenting their results. It's the natural outcome of a frantic scramble to publish.

By Martha Gill

Science, according to a recent Nature article, is like Battleship. You fire shots into the dark and mostly miss your target. But every missed shot is useful, and after a while you have a clear picture of the landscape and can finally work out where the ships are.

Trying to get your study published, though, is not like Battleship. It’s like Trafalgar. You really can’t afford negative results. An analysis by Daniele Fanelli found that on average only 10–15 per cent of published results are negative, and that the proportion has been declining by about 6 per cent a year. Science journals just aren’t interested in write-ups that don’t find anything. And other scientists don’t tend to cite them.

What to do, then, if you turn up a normal proportion of null results (that is, most of them) and you want to make an impact on your field? Well, there are ways of getting the results you want. You could, for example, cherry-pick. In a batch of negative results there is often the odd positive, thrown up by statistical fluctuation or chemical impurities. You could simply ignore the rest and report that one.
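The arithmetic behind that temptation is stark. A minimal sketch (standard library only; the lab, the 20 experiments and the sample sizes are invented for illustration) of how often a batch of purely null experiments still hands you at least one "significant" result to report:

```python
import random

random.seed(0)

def run_null_experiment(n=30):
    """One null experiment: two groups drawn from the SAME distribution,
    compared with a z-test on the difference of means (known sigma = 1)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_diff = sum(a) / n - sum(b) / n
    se = (2 / n) ** 0.5
    return abs(mean_diff / se) > 1.96   # "significant" at p < 0.05

# A hypothetical lab runs 20 unrelated null experiments and is tempted
# to report only the hits. How often does at least one hit appear?
trials = 2000
batches_with_a_hit = sum(
    any(run_null_experiment() for _ in range(20))
    for _ in range(trials)
)
print(batches_with_a_hit / trials)   # roughly 1 - 0.95**20, i.e. ~0.64
```

Even though every single experiment is a true negative, nearly two batches in three contain a cherry-pickable "positive".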

Some do, and it’s a real problem. Journals full of happy accidents send future researchers off on wild goose chases, or cost them time and money on replications that never quite succeed. From this comes the famous “decline effect” – findings that become less marked every time an experiment is repeated. And studies that show important dead-ends never see the light of day, meaning those dead-ends get discovered over and over again. Those who wipe experiments from history seem doomed to repeat them.
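The "decline effect" needs no mystery to explain it: if only impressive estimates get published, the published literature overstates the truth, and unfiltered replications drift back down. A toy sketch (standard library only; the true effect of 0.2, the sample size and the publication threshold are all invented numbers):

```python
import random

random.seed(1)

def estimate_effect(true_effect=0.2, n=30):
    """A noisy estimate of a small true effect; the standard error
    of the estimate is 1 / sqrt(n)."""
    return true_effect + random.gauss(0, 1 / n ** 0.5)

# Original studies only get published when the estimate looks impressive
# (0.36 is an arbitrary "significance" cut-off for this sketch).
published = [e for e in (estimate_effect() for _ in range(5000)) if e > 0.36]

# Replications of the published findings face no such filter.
replications = [estimate_effect() for _ in published]

print(sum(published) / len(published))        # inflated well above 0.2
print(sum(replications) / len(replications))  # back near the true 0.2
```

The published average sits far above the true effect purely because of the filter; repeat the experiment without the filter and the effect "declines" toward reality.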

But of course this is not necessarily a deterrent. Fanelli notes that states where science competition is fiercest produce the largest number of positive results. Send your competitors down dead ends and up garden paths? In the scramble to publish, this only seems like a good idea.


Not everyone does this, though. Just a couple of moral points ahead of the cherry-pickers come the judicious plodders. These researchers limit themselves to experiments with predictable outcomes: a sure way to get published, even if it never advances the field.

Then, with all the motivations in place, you get full-on fraud. In 2011, a social psychologist called Diederik Stapel was found to have made up data on a huge scale – it turned out he had lied about his results in at least 30 publications. Those who conducted the investigation wrote: “Whereas all these excessively neat findings should have provoked thought, they were embraced … People accepted, if they even attempted to replicate the results for themselves, that they had failed because they lacked Mr Stapel’s skill.”

How to solve the problem? It’s difficult, because what is essentially a group endeavour is fraught with competition. You can’t change that overnight, but you could tweak the prize: publish more negative results. It’s a big ask – rather like requiring newspapers to report neutral events instead of interesting ones – but a necessary one. Science journals should stop trying to be exciting, and settle for just being right.