We need to stop worrying and trust our robot researchers

The work of Francis Crick and James Watson gives us a vision of what's to come.

It’s now 60 years since the publication of the structure of DNA. As we celebrate the past, the work of Francis Crick and James Watson also gives us a vision of what’s to come. Their paper was not subjected to peer review, today’s gold standard for the validation of scientific research. Instead, it was discussed briefly over a lunch at the Athenaeum Club. In an editorial celebrating the anniversary, the journal Nature, which originally published the research, points out that this is “unthinkable now”.

However, peer review has always been somewhat patchy and it is becoming ever more difficult. This is the age of “big data”, in which scientists make their claims based on analysis of enormous amounts of information, often carried out by custom-written software. The peer review process, done on an unpaid, voluntary basis in researchers’ spare time, doesn’t have the capacity to go through all the data-analysis techniques. Reviewers have to rely on their intuition.

There are many instances of this leading science up the garden path but recently we were treated to a spectacular example in economics. In 2010, Harvard professors published what quickly became one of the most cited papers of the year. Simply put, it said that if your gross public debt is more than 90 per cent of your national income, you are going to struggle to achieve any economic growth.

Dozens of newspapers quoted the research, the Republican Party built its budget proposal on it and no small number of national leaders used it to justify their preferred policies. Which makes it all the more depressing that it has been unmasked as completely wrong.

The problem lay in poor data-handling. The researchers left out certain data points, gave questionable weight to parts of the data set and – most shocking of all – made a mistake in the programming of their Excel spreadsheet.

The Harvard paper was not peer-reviewed before publication. It was only when the researchers shared software and raw data with peers sceptical of the research that the errors came to light.

The era of big data in science will stand or fall on such openness and collaboration. It used to be that collaboration arose from the need to create data. Crick and Watson collaborated with Maurice Wilkins to gather the data they needed – from Rosalind Franklin’s desk drawer, without her knowledge or permission. That was what gave them their pivotal insight. However, as Mark R Abbott of Oregon State University puts it, “We are no longer data-limited but insight-limited.”

Gaining insights from the data flood will require a different kind of science from Crick’s and Watson’s and it may turn out to be one to which computers and laboratory-based robots are better suited than human beings. In another 60 years, we may well be looking back at an era when silicon scientists made the most significant discoveries.

A robot working in a lab at Aberystwyth University made the first useful computer-generated scientific contribution in 2009, in the field of yeast genomics. It came up with a hypothesis, performed experiments and reached a conclusion, then had its work published in the journal Science. Since then, computers have made further inroads. So far, most (not all) have been checked by human beings but that won’t be possible for long. Eventually, we’ll be taking their insights on trust and intuition stretched almost to breaking point – just as we did with Crick and Watson.


Michael Brooks holds a PhD in quantum physics. He writes a weekly science column for the New Statesman, and his most recent book is At the Edge of Uncertainty: 11 Discoveries Taking Science by Surprise.


Why Twitter is dying, in ten tweets

It's ironic that the most heated discussions of the platform's weaknesses are playing out on the platform itself. 

Twitter has been dying since 2009, and commentators have pre-emptively declared it deceased pretty much every year since. To declare that it's on the downturn has become a bit of a cliché. But that doesn't mean that it isn't also, well, true.

Grumbling among users and commentators has grown to a roar over the past few days, thanks in part to a Buzzfeed report (denied by Jack Dorsey, Twitter's CEO) claiming the service will move away from a chronological timeline and towards an algorithmic one. Users coined the hashtag #RIPTwitter in response, and, tellingly, many of their complaints spanned beyond the apparently erroneous report. 

They join a clutch of other murmurings, bits of data and suggestions that things are not as they should be in the Twitter aviary. 

Below is one response to the threat of the new timeline, aptly showing that for lots of users, the new feed would have been the straw that broke the tweeters' backs:

Twitter first announced it was considering a new 10,000-character limit in January, but it's yet to be introduced. Reactions so far indicate that almost no one thinks this is a good idea, as the 140-character limit is so central to Twitter's unique appeal. Other, smaller tweaks – like an edit button – would probably sit much more easily within Twitter's current stable of features, and actually improve user experience: 

While Dorsey completely denied that the change would take place, he then followed up with an ominous suggestion that something would be changing:

"It'll be more real-time than a feed playing out in real time!" probably isn't going to placate users who think the existing feed works just fine. It may be hard to make yourself heard on the current timeline, but any kind of wizardry that decides what's "timely" or "live" for you is surely going to discriminate against already alienated users.

I've written before about the common complaint that Twitter is lonely for those with smaller networks. Take this man, who predicts that he'll be even more invisible in Twitter's maelstrom if an algorithm deems him irrelevant: 

What's particularly troubling about Twitter's recent actions is the growing sense that it doesn't "get" its users. This was all but confirmed by a recent string of tweets from Brandon Carpenter, a Twitter employee who tweeted this in response to speculation about new features:

...and then was surprised and shocked when he received abuse from other accounts:

This is particularly ironic because Twitter's approach (or non-approach) to troll accounts and online abusers has made it a target for protest and satire (though last year it did begin to tackle the problem). @TrustySupport, a spoof account, earned hundreds of retweets by mocking Twitter's response to abuse:

Meanwhile, users like Milo Yiannopoulos, who regularly incites his followers to abuse and troll individuals (often women and trans people, and most famously as part of G*merg*te), have thrived on Twitter's model; he currently enjoys the attentions of almost 160,000 followers. He has boasted about the fact that Twitter could monetise his account to pull itself out of its current financial trough:

The proof of any social media empire's decline, though, is in the number and activity of its users. Earlier this month, Business Insider reported that, based on a sample of tweets, tweets per user had fallen by almost 50 per cent since last August. Here's the reporter's tweet about it:

Interestingly, numbers of new users remained roughly the same – which implies not that Twitter can't get new customers, but that it can't keep its current ones engaged and tweeting. 

Most tellingly of all, Twitter has stopped reporting these kinds of numbers publicly, which is why Jim Edwards had to rely on data taken from an API. Another publication followed up Edwards' story with reports that users aren't on the platform enough to generate ad revenue:

The missing piece of the puzzle, and perhaps the one thing keeping Twitter alive, is that its replacement hasn't (yet) surfaced. Commentators obsessed with its declining fortunes still take to Twitter to discuss them, or to share their articles claiming the platform is already dead. It's ironic that the most heated discussions of the platform's weaknesses are playing out on the platform itself. 

For all its faults, and for all they might multiply, Twitter's one advantage is that there's currently no other totally open platform where people can throw their thoughts around in plain, public view. Its greatest threat yet will come not from a new, dodgy feature, but from a new platform – one that can actually compete with it.

Barbara Speed is a technology and digital culture writer at the New Statesman and a staff writer at CityMetric.