We need to stop worrying and trust our robot researchers

The work of Francis Crick and James Watson gives us a vision of what's to come.

It’s now 60 years since the publication of the structure of DNA. As we celebrate the past, the work of Francis Crick and James Watson also gives us a vision of what’s to come. Their paper was not subjected to peer review, today’s gold standard for the validation of scientific research. Instead, it was discussed briefly over a lunch at the Athenaeum Club. In an editorial celebrating the anniversary, the journal Nature, which originally published the research, points out that this is “unthinkable now”.

However, peer review has always been somewhat patchy and it is becoming ever more difficult. This is the age of “big data”, in which scientists make their claims based on analysis of enormous amounts of information, often carried out by custom-written software. The peer review process, done on an unpaid, voluntary basis in researchers’ spare time, doesn’t have the capacity to check all of that data or the custom analysis code behind each claim. Reviewers have to rely on their intuition.

There are many instances of this leading science up the garden path but recently we were treated to a spectacular example in economics. In 2010, Harvard professors published what quickly became one of the most cited papers of the year. Simply put, it said that if your gross public debt is more than 90 per cent of your national income, you are going to struggle to achieve any economic growth.

Dozens of newspapers quoted the research, the Republican Party built its budget proposal on it and no small number of national leaders used it to justify their preferred policies. Which makes it all the more depressing that it has been unmasked as completely wrong.

The problem lay in poor data-handling. The researchers left out certain data points, gave questionable weight to parts of the data set and – most shocking of all – made a mistake in the programming of their Excel spreadsheet.

The Harvard paper was not peer-reviewed before publication. It was only when the researchers shared software and raw data with peers sceptical of the research that the errors came to light.

The era of big data in science will stand or fall on such openness and collaboration. It used to be that collaboration arose from the need to create data. Crick and Watson collaborated with Maurice Wilkins to gather the data they needed – from Rosalind Franklin’s desk drawer, without her knowledge or permission. That was what gave them their pivotal insight. However, as Mark R Abbott of Oregon State University puts it, “We are no longer data-limited but insight-limited.”

Gaining insights from the data flood will require a different kind of science from Crick’s and Watson’s, and it may turn out to be one to which computers and laboratory-based robots are better suited than human beings. In another 60 years, we may well be looking back at an era when silicon scientists made the most significant discoveries.

A robot working in a lab at Aberystwyth University made the first useful computer-generated scientific contribution in 2009, in the field of yeast genomics. It came up with a hypothesis, performed experiments and reached a conclusion, then had its work published in the journal Science. Since then, computers have made further inroads. So far, most (though not all) of these contributions have been checked by human beings, but that won’t be possible for long. Eventually, we’ll be taking their insights on trust, with intuition stretched almost to breaking point – just as we did with Crick and Watson.


Michael Brooks holds a PhD in quantum physics. He writes a weekly science column for the New Statesman, and his most recent book is At the Edge of Uncertainty: 11 Discoveries Taking Science by Surprise.


Gravitational waves found: how we proved the final part of Einstein's theory of relativity

Our first direct glimpse of the dark universe – and the first direct evidence that black holes exist.

One hundred years ago, Albert Einstein predicted in his general theory of relativity the existence of a dark side to the cosmos. He thought there were invisible “gravitational waves”, ripples in space-time produced by some of the most violent events in the cosmos – exploding stars, colliding black holes, perhaps even the Big Bang itself. For decades, astronomers have gathered strong corroborative evidence of the existence of these waves, but they had never been detected directly – until now. They were the last part of the general theory still to be verified.

Astronomers have used light to study the universe with optical telescopes for hundreds of years. We have expanded that view hugely since the middle of the 20th century, by building detectors and instruments sensitive to all the forms of what physicists mean by light: the electromagnetic spectrum, from gamma rays to radio. Yet the discovery of gravitational waves represents our first steps into studying the universe through the gravitational-wave spectrum, which exists independently from light, probing directly the effects of gravity as it spreads across the cosmos. It is the first page in a whole new chapter for astronomy, and science.

How we made the discovery

The discovery dates back to last September, when two giant measuring devices in different parts of the US, together known as LIGO (the Laser Interferometer Gravitational-Wave Observatory), caught a passing gravitational wave from the collision of two massive black holes in a faraway galaxy. Each LIGO detector is what we call an interferometer, consisting of two 4km “arms” set at right angles to each other, protected by concrete tubes, with a laser beam shone along each arm and reflected back and forth by mirrors at the ends.

When a gravitational wave passes by, the stretching and squashing of space causes these arms alternately to lengthen and shrink, one getting longer while the other gets shorter and then vice versa. As the arms change length, the laser beams take slightly different times to travel along them. This means that the two beams are no longer “in step” and what we call an interference pattern is produced – hence the name interferometer.
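To make that more concrete (a simplified sketch using standard textbook relations, not figures quoted in this article): the strength of the wave is described by a dimensionless “strain” h, the fractional change in arm length, and for a simple Michelson interferometer the resulting phase shift between the two returning beams is roughly

\[ h = \frac{\Delta L}{L}, \qquad \Delta\phi \approx \frac{4\pi\,\Delta L}{\lambda} \]

where L is the arm length, \(\Delta L\) the change in that length and \(\lambda\) the laser wavelength (around a micron). The real Advanced LIGO detectors amplify the effect further with optical cavities in each arm.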

The changes in the length of the arms are actually tiny – roughly one million millionth the width of a human hair. This is because the signal from a gravitational wave from far out in the cosmos is mind-bogglingly small by the time it reaches us. As if detecting this were not difficult enough, all manner of local disturbances on Earth make it worse, from ground tremors to power-grid fluctuations, as well as instrumental “noise” that could mimic or indeed completely swamp a real signal from the cosmos.
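To put a rough number on that (a back-of-envelope estimate using the published peak strain for this event, not a figure taken from this article):

\[ \Delta L \approx h \times L \approx 10^{-21} \times 4{,}000\,\mathrm{m} \approx 4 \times 10^{-18}\,\mathrm{m} \]

a few billionths of a billionth of a metre over a 4km arm, which is why isolating the mirrors from every other source of vibration matters so much.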

To achieve the astounding sensitivity required, almost every aspect of the LIGO detectors' design has been upgraded over the past few years. We at the University of Glasgow led a consortium of UK institutions that played a key role – developing, constructing and installing the sensitive mirror suspensions at the heart of the LIGO detectors that were crucial to this first detection. The technology was based on our work on the earlier UK/German GEO600 detector. This turned LIGO into Advanced LIGO, arguably the most sensitive scientific instrument ever, to give us our first direct glimpse of the dark universe.

A long, long time ago...

What a glimpse it was. The two black holes that collided were about 29 and 36 times the mass of our sun respectively (shown in the computer visualisation below). This is, incidentally, the first direct evidence that black holes exist, can exist in a pair, and can collide and merge. Comparing our data with Einstein’s predictions allowed us to test whether general relativity correctly describes such a collision – it passed with flying colours.

The black-hole collision (computer visualisation)

The merger occurred more than a billion light years from Earth, converting three times the mass of the sun into gravitational-wave energy. In a fraction of a second, the power radiated in these waves was more than ten times greater than the combined luminosity of every star and galaxy in the observable universe. This was a truly cataclysmic event, a long time ago in a galaxy far, far away. In Star Wars, Darth Vader tells us not to “underestimate the power of the dark side”. This amazing discovery shows how right he was.
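As a rough check on that energy figure (a back-of-envelope calculation from Einstein’s mass-energy relation, using round numbers that are not drawn from this article, and taking the observed signal to last roughly a couple of tenths of a second):

\[ E = \Delta m\,c^{2} \approx 3 \times \left(2 \times 10^{30}\,\mathrm{kg}\right) \times \left(3 \times 10^{8}\,\mathrm{m/s}\right)^{2} \approx 5 \times 10^{47}\,\mathrm{J}, \qquad P \sim \frac{E}{0.2\,\mathrm{s}} \approx 3 \times 10^{48}\,\mathrm{W} \]

That is the average over the final burst; the instantaneous peak was higher still.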

Of course our discovery isn’t just about checking if Einstein was right. Detecting gravitational waves will help us to probe the most extreme corners of the cosmos – the event horizon of a black hole, the innermost heart of a supernova, the internal structure of a neutron star: regions that are completely inaccessible to electromagnetic telescopes.

Could we ever harness gravitational waves for practical applications here on Earth? Could new insights about the dark universe help us, perhaps in the far future, not just to measure gravitational fields but to manipulate them, as imagined in the space colonies and wormholes of Christopher Nolan’s Interstellar? That is much harder to predict, but the lesson of history is that new phenomena we discover and explore frequently lead to disruptive technologies that come to underpin our everyday lives. It might take a few centuries, but I am confident the same will be true with gravitational waves.


Martin Hendry, Professor of Gravitational Astrophysics and Cosmology, University of Glasgow

This article was originally published on The Conversation. Read the original article.