We need to stop worrying and trust our robot researchers

The work of Francis Crick and James Watson gives us a vision of what's to come.

It’s now 60 years since the publication of the structure of DNA. As we celebrate the past, the work of Francis Crick and James Watson also gives us a vision of what’s to come. Their paper was not subjected to peer review, today’s gold standard for the validation of scientific research. Instead, it was discussed briefly over a lunch at the Athenaeum Club. In an editorial celebrating the anniversary, the journal Nature, which originally published the research, points out that this is “unthinkable now”.

However, peer review has always been somewhat patchy and it is becoming ever more difficult. This is the age of “big data”, in which scientists make their claims based on analysis of enormous amounts of information, often carried out by custom-written software. The peer review process, done on an unpaid, voluntary basis in researchers’ spare time, doesn’t have the capacity to go through all the data-analysis techniques. Reviewers have to rely on their intuition.

There are many instances of this leading science up the garden path, but recently we were treated to a spectacular example in economics. In 2010, the Harvard professors Carmen Reinhart and Kenneth Rogoff published what quickly became one of the most cited papers of the year. Simply put, it said that if your gross public debt is more than 90 per cent of your national income, you are going to struggle to achieve any economic growth.

Dozens of newspapers quoted the research, the Republican Party built its budget proposal on it and no small number of national leaders used it to justify their preferred policies. Which makes it all the more depressing that it has been unmasked as completely wrong.

The problem lay in poor data-handling. The researchers left out certain data points, gave questionable weight to parts of the data set and – most shocking of all – made a mistake in the programming of their Excel spreadsheet.
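A minimal sketch shows how easily that last kind of mistake slips through. The figures below are invented for illustration (they are not the paper's actual data), but the mechanism is the same as the reported one: a spreadsheet formula whose cell range stops a few rows short silently drops countries from the average.

```python
# Hypothetical per-country growth rates (invented numbers)
growth_rates = [2.1, -0.3, 1.5, 0.8, 3.2, 2.7, 1.9]

# Intended calculation: average over all seven countries
correct_mean = sum(growth_rates) / len(growth_rates)

# The bug: the spreadsheet equivalent of a range that ends two rows
# early, so the last two countries never enter the average
truncated = growth_rates[:5]
buggy_mean = sum(truncated) / len(truncated)

print(f"correct: {correct_mean:.2f}, buggy: {buggy_mean:.2f}")
```

No error message is produced in either case, which is exactly why this class of mistake survives until someone re-runs the analysis on the raw data.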

The Harvard paper was not peer-reviewed before publication. It was only when the researchers shared software and raw data with peers sceptical of the research that the errors came to light.

The era of big data in science will stand or fall on such openness and collaboration. It used to be that collaboration arose from the need to create data. Crick and Watson collaborated with Maurice Wilkins to gather the data they needed – from Rosalind Franklin’s desk drawer, without her knowledge or permission. That was what gave them their pivotal insight. However, as Mark R Abbott of Oregon State University puts it, “We are no longer data-limited but insight-limited.”

Gaining insights from the data flood will require a different kind of science from Crick and Watson's, and it may turn out to be one to which computers and laboratory-based robots are better suited than human beings. In another 60 years, we may well be looking back at an era when silicon scientists made the most significant discoveries.

A robot working in a lab at Aberystwyth University made the first useful computer-generated scientific contribution in 2009, in the field of yeast genomics. It came up with a hypothesis, performed experiments and reached a conclusion, then had its work published in the journal Science. Since then, computers have made further inroads. So far, most (not all) have been checked by human beings, but that won't be possible for long. Eventually, we'll be taking their insights on trust, with intuition stretched almost to breaking point – just as we did with Crick and Watson.


Michael Brooks holds a PhD in quantum physics. He writes a weekly science column for the New Statesman, and his most recent book is At the Edge of Uncertainty: 11 Discoveries Taking Science by Surprise.


Did your personality determine whether you voted for Brexit? Research suggests so

The Online Privacy Foundation found Leave voters were significantly more likely to be authoritarian and conscientious. 

"Before referendum day, I said the winners would be those who told the most convincing lies," Paul Flynn, a Labour MP, wrote in these pages. "Leave did." The idea that those who voted for Brexit were somehow manipulated is widely accepted by the Remain camp. The Leave campaign, so the argument goes, played on voters' fears and exploited their low numeracy. And new research from the Online Privacy Foundation suggests this argument may, in part at least, be right. 

Over the last 18 months, the organisation has researched differences in personality traits, levels of authoritarianism, numeracy, thinking styles and cognitive biases between EU referendum voters. It conducted a series of studies, capturing over 11,000 responses to self-report psychology questionnaires and controlled experiments, with the final results scheduled to be presented at the International Conference on Political Psychology in Copenhagen in October 2017.

The researchers questioned voters using the "Five Factor Model", which consists of five broad personality traits – Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism. They also considered the disposition of authoritarianism (it is not considered a personality trait). Authoritarians have a more black-and-white view of the world around them, are more concerned with the upkeep of established societal traditions and have a tendency to be less accepting of outsiders.

So what did they uncover? Participants expressing an intent to vote to leave the EU reported significantly higher levels of authoritarianism and conscientiousness, and lower levels of openness and neuroticism than voters expressing an intent to vote to remain. (Conscientiousness is associated with dependability, dutifulness, focus and adherence to societal norms in contrast to disorganisation, carelessness and impulsivity.)

Immigration in particular seems to have affected voting. While authoritarians were much more likely to vote Leave to begin with, those who were less authoritarian became increasingly likely to vote Leave if they expressed high levels of concern over immigration. These findings chime with research by the professors Marc Hetherington and Elizabeth Suhay, which found that Americans became susceptible to "authoritarian thinking" when they perceived a grave threat to their safety.

Then there's what you might call the £350m question: did Leave voters know what they were voting for? When the Online Privacy Foundation researchers compared the two groups, Leave voters displayed significantly lower levels of numeracy and reasoning, and appeared more impulsive. In all three areas, older voters performed significantly worse than younger voters intending to vote the same way.

Even when voters were able to interpret statistics, their ability to do so could be overcome by partisanship. In one striking study, when voters were asked to interpret statistics about whether a skin cream increases or decreases a rash, they were able to interpret them correctly roughly 57 per cent of the time. But when voters were told that the same set of statistics described whether immigration increases or decreases crime, something disturbing happened.

If the statistics didn't support a voter's view, their ability to correctly interpret the numbers dropped, in some cases, by almost a half. 
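What makes this kind of problem hard is that the correct reading requires comparing proportions rather than raw counts. The figures below are invented to illustrate the trap (they are not the study's actual data): more people improve with the cream in absolute terms, yet the cream group's improvement rate is lower.

```python
# Hypothetical outcome counts in the style of the skin-cream experiment
used_cream = {"improved": 223, "worsened": 75}
no_cream = {"improved": 107, "worsened": 21}

# The intuitive (wrong) reading compares raw counts: 223 > 107,
# so the cream looks helpful. The correct reading compares rates.
rate_cream = used_cream["improved"] / (used_cream["improved"] + used_cream["worsened"])
rate_none = no_cream["improved"] / (no_cream["improved"] + no_cream["worsened"])

print(f"improvement rate with cream: {rate_cream:.2f}")
print(f"improvement rate without:    {rate_none:.2f}")
```

With these numbers the no-cream group improves at the higher rate, so a voter anchored on the bigger raw count gets the answer exactly backwards – the same inversion the immigration version of the question exploited.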

Before Remoaners start to crow, this study is not an affirmation that "I'm smart, you're dumb". Further research could be done, for example, on the role of age and education (young graduates were far more likely to vote Remain). But in the meantime, there is a question that needs to be answered - are political campaigners deliberately exploiting these personality traits? 

Chris Sumner, of the Online Privacy Foundation, warns that clues about our personalities are collected online: "In the era of Big Data, these clues are aggregated, transformed and sold by a burgeoning industry."

Indeed, Cambridge Analytica, a data company associated with the political right in the UK and US, states on its website that it can "more effectively engage and persuade voters using specially tailored language and visual ad combinations crafted with insights gleaned from behavioral understandings of your electorate". It will do so through a "blend of big data analytics and behavioural psychology". 

"Given the differences observed between Leave and Remain voters, and irrespective of which campaign, it is reasonable to hypothesize that industrial-scale psychographic profiling would have been a highly effective strategy," Sumner says. By identifying voters with different personalities and attitudes, such campaigns could target "the most persuadable voters with messages most likely to influence their vote". Indeed, in research yet to be published, the Online Privacy Foundation targeted groups with differing attitudes to civil liberties based on psychographic indicators associated with authoritarianism. The findings, says Sumner, illustrate "the ease with which individuals' inherent differences could be exploited". 

Julia Rampen is the digital news editor of the New Statesman (previously editor of The Staggers, The New Statesman's online rolling politics blog). She has also been deputy editor at Mirror Money Online and has worked as a financial journalist for several trade magazines. 
