Science, according to a recent Nature article, is like Battleship. You fire shots into the dark and mostly miss your target. But every missed shot is useful, and after a while you have a clear picture of the landscape and can finally work out where the ships are.
Trying to get your study published, though, is not like Battleship. It's like Trafalgar. You really can’t afford negative results. An analysis by Daniele Fanelli found that on average only 10 to 15 per cent of published results are negative, and that the share has been shrinking by about 6 per cent a year. Science journals just aren't interested in write-ups that don’t find anything. And other scientists don’t tend to cite them.
What to do, then, if you turn up a normal proportion of null results (most of them) and you want to make an impact on your field? Well, there are ways of getting the results you want. You could, for example, cherry-pick. In a batch of negative results there is often the odd positive, thrown up by statistical fluctuation or chemical impurities. You could simply ignore the rest and report that one.
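How often such a stray positive turns up is easy to simulate. The sketch below (all numbers illustrative) runs a thousand experiments on data with no real effect at all and counts how many still clear a standard significance threshold by chance:

```python
import random

random.seed(1)

def t_stat(sample, mu=0.0):
    """One-sample t statistic against a hypothesised mean mu."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    return (mean - mu) / (var / n) ** 0.5

# 1,000 null experiments: 20 noisy measurements each, true effect = 0.
# With the usual two-sided 5% cutoff (|t| > 2.093 at 19 degrees of
# freedom), around 5 per cent should look "positive" by luck alone.
trials = 1000
false_positives = 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(20)]
    if abs(t_stat(sample)) > 2.093:
        false_positives += 1

print(f"{false_positives} of {trials} null experiments came out 'positive'")
```

Report only those few dozen and quietly drop the rest, and a literature of pure noise looks like a literature of discoveries.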
Some do, and it's a real problem. Journals full of happy accidents send future researchers off on wild goose chases, or cost them money and time on replications that never quite come off. From this comes the famous “decline effect” - findings that become less marked every time an experiment is repeated. And studies that map important dead ends never see the light of day, meaning those dead ends get discovered over and over again. Those who wipe experiments from history seem doomed to repeat them.
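The decline effect follows from simple selection: if only the most striking of many noisy estimates gets published, an honest re-run will usually come in lower, because the original owed part of its size to luck. A minimal sketch, with all the numbers invented for illustration:

```python
import random

random.seed(7)

TRUE_EFFECT = 0.2   # small real effect, the same in every lab
NOISE = 1.0         # sampling noise on each lab's estimate
N_LABS = 50

def measure():
    """One lab's estimate of the effect: truth plus noise."""
    return TRUE_EFFECT + random.gauss(0, NOISE)

# The original literature: only the most striking result is published.
published = max(measure() for _ in range(N_LABS))

# A replication: one unbiased re-run of the same experiment.
replication = measure()

print(f"published estimate:   {published:.2f}")
print(f"replication estimate: {replication:.2f}")
```

The published figure sits far above the true effect of 0.2, while the replication scatters around the truth - no fraud required, just selective reporting.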
But of course this is not necessarily a deterrent. Fanelli notes that the states where competition in science is fiercest produce the highest proportion of positive results. Send your competitors down dead ends and up garden paths? In the scramble to publish, it can look like a good idea.
Not everyone goes this far, though. Just a couple of moral points ahead of the cherry-pickers come the judicious plodders. These researchers limit themselves to experiments with predictable outcomes - a sure way to get published, even if it never advances the field.
Then, with all the motivations in place, you get full-on fraud. In 2011, a social psychologist called Diederik Stapel was found to have made up data on a huge scale - it turned out he had lied about his results in at least 30 publications. Those who conducted the investigation wrote: “Whereas all these excessively neat findings should have provoked thought, they were embraced ... People accepted, if they even attempted to replicate the results for themselves, that they had failed because they lacked Mr Stapel's skill.”
How to solve the problem? It's difficult, because what is essentially a group endeavour is fraught with competition. You can't change that overnight, but you could tweak the prize: publish more negative results. It’s a big ask - rather like requiring newspapers to report neutral events instead of interesting ones - but a necessary one. Science journals should stop trying to be exciting, and settle for just being right.