We need to stop worrying and trust our robot researchers

The work of Francis Crick and James Watson gives us a vision of what's to come.

It’s now 60 years since the publication of the structure of DNA. As we celebrate the past, the work of Francis Crick and James Watson also gives us a vision of what’s to come. Their paper was not subjected to peer review, today’s gold standard for the validation of scientific research. Instead, it was discussed briefly over a lunch at the Athenaeum Club. In an editorial celebrating the anniversary, the journal Nature, which originally published the research, points out that this is “unthinkable now”.

However, peer review has always been somewhat patchy and it is becoming ever more difficult. This is the age of “big data”, in which scientists make their claims based on analysis of enormous amounts of information, often carried out by custom-written software. The peer review process, done on an unpaid, voluntary basis in researchers’ spare time, doesn’t have the capacity to go through all the data-analysis techniques. Reviewers have to rely on their intuition.

There are many instances of this leading science up the garden path but recently we were treated to a spectacular example in economics. In 2010, the Harvard professors Carmen Reinhart and Kenneth Rogoff published what quickly became one of the most cited papers of the year. Simply put, it said that if your gross public debt is more than 90 per cent of your national income, you are going to struggle to achieve any economic growth.

Dozens of newspapers quoted the research, the Republican Party built its budget proposal on it and no small number of national leaders used it to justify their preferred policies. Which makes it all the more depressing that it has been unmasked as completely wrong.

The problem lay in poor data-handling. The researchers left out certain data points, gave questionable weight to parts of the data set and – most shocking of all – made a formula error in their Excel spreadsheet.

The Harvard paper was not peer-reviewed before publication. It was only when the researchers shared software and raw data with peers sceptical of the research that the errors came to light.

The era of big data in science will stand or fall on such openness and collaboration. It used to be that collaboration arose from the need to create data. Crick and Watson collaborated with Maurice Wilkins to gather the data they needed – from Rosalind Franklin’s desk drawer, without her knowledge or permission. That was what gave them their pivotal insight. However, as Mark R Abbott of Oregon State University puts it, “We are no longer data-limited but insight-limited.”

Gaining insights from the data flood will require a different kind of science from Crick’s and Watson’s and it may turn out to be one to which computers and laboratory-based robots are better suited than human beings. In another 60 years, we may well be looking back at an era when silicon scientists made the most significant discoveries.

A robot working in a lab at Aberystwyth University made the first useful computer-generated scientific contribution in 2009, in the field of yeast genomics. It came up with a hypothesis, performed experiments and reached a conclusion, then had its work published in the journal Science. Since then, computers have made further inroads. So far, most (not all) of their contributions have been checked by human beings but that won’t be possible for long. Eventually, we’ll be taking their insights on trust, with intuition stretched almost to breaking point – just as we did with Crick and Watson.


Michael Brooks holds a PhD in quantum physics. He writes a weekly science column for the New Statesman, and his most recent book is At the Edge of Uncertainty: 11 Discoveries Taking Science by Surprise.


“Like a giant metal baby”: whether you like it or not, robots are already part of our world

For centuries, we have built replacements for ourselves. But are we ready to understand the implications?

There were no fireworks to dazzle the crowd lining the streets of Alexandria to celebrate Cleopatra’s triumphant return to the city in 47BC. Rather, there was a four-and-a-half-metre-tall robotic effigy of the queen, which squirted milk from mechanical bosoms on to the heads of onlookers. Cleopatra, so the figure was meant to symbolise, was a mother to her people.

It turns out that robots go back a long way. At the “Robots” exhibition now on at the Science Museum in London, a clockwork monk from 1560 walks across a table while raising a rosary and crucifix, its lips murmuring in devotion. It is just one of more than 100 exhibits, drawn from humankind’s half-millennium-long obsession with creating mechanical tools to serve us.

“We defined a robot as a machine which looks lifelike, or behaves in lifelike ways,” Ben Russell, the lead curator of the exhibition, told me. This definition extends beyond the mechanisms of the body to include those of the mind. This accounts for the inclusion of robots such as “Cog”, a mash-up of screws, motors and scrap metal that is, the accompanying blurb assures visitors, able to learn about the world by poking at colourful toys, “like a giant metal baby”.

The exhibits show that there has long existed in our species a deep desire to rebuild ourselves from scratch. That impulse to understand and replicate the systems of the body can be seen in some of the earliest surviving examples of robotics. In the 16th century, the Catholic Church commissioned some of the first anthropomorphic mechanical machines, suggesting that the human body had clockwork-like properties. Models of Jesus bled and automatons of Satan roared.

Robots have never been mere anatomical models, however. In the modern era, they are typically employed to work on the so-called 4D tasks: those that are dull, dumb, dirty, or dangerous. A few, such as Elektro, a robot built in Ohio in the late 1930s, which could smoke a cigarette and blow up balloons, were showmen. Elektro toured the US in 1950 and had a cameo in an adult movie, playing a mechanical fortune-teller picking lottery numbers and racehorses.

Nevertheless, the idea of work is fundamental to the term “robot”. Karel Čapek’s 1920 science-fiction play R.U.R., credited with introducing the word to the English language, depicts an artificial labour force that rebels against its human masters. The Czech word robota means “forced labour”. It is derived from rab, which means “slave”.

This exhibition has proved timely. A few weeks before it opened in February, a European Parliament commission demanded that a set of regulations be drawn up to govern the use and creation of robots. In early January, Reid Hoffman and Pierre Omidyar, the founders of LinkedIn and eBay respectively, contributed $10m each to a fund intended to prevent the development of artificial intelligence applications that could harm society. Human activity is increasingly facilitated, monitored and analysed by AI and robotics.

Developments in AI and cybernetics are converging on the creation of robots that are free from direct human oversight and whose impact on human well-being has been, until now, the stuff of science fiction. Engineers have outpaced philosophers and lawmakers, who are still grappling with the implications as autonomous cars roll on to our roads.

“Is the world truly ready for a vehicle that can drive itself?” asked a recent television advert for a semi-autonomous Mercedes car (the film was pulled soon afterwards). For Mercedes, our answer to the question didn’t matter much. “Ready or not, the future is here,” the ad concluded.

There have been calls to halt or reverse advances in robot and AI development. Stephen Hawking has warned that advanced AI “could spell the end of the human race”. The entrepreneur Elon Musk agreed, stating that AI presents the greatest existential threat to mankind. The German philosopher Thomas Metzinger has argued that the prospect of increasing suffering in the world through this new technology is so morally awful that we should cease to build artificially intelligent robots immediately.

Others counter that it is impossible to talk sensibly about robots and AI. After all, we have never properly settled on the definitions. Is an inkjet printer a robot? Does Apple’s Siri have AI? Today’s tech miracle is tomorrow’s routine tool. It can be difficult to know whether to take up a hermit-like existence in a wifi-less cave, or to hire a Japanese robo-nurse to swaddle our ageing parents.

As well as the fear of what these machines might do to us if their circuits gain sentience, there is the pressing worry of, as Russell puts it, “what we’re going to do with all these people”. Autonomous vehicles, say, could wipe out the driving jobs that have historically been the preserve of workers displaced from elsewhere.

“How do we plan ahead and put in place the necessary political, economic and social infrastructure so that robots’ potentially negative effects on society are mitigated?” Russell asks. “It all needs to be thrashed out before it becomes too pressing.”

Such questions loom but, in looking to the past, this exhibition shows how robots have acted as society’s mirrors, reflecting how our hopes, dreams and fears have changed over the centuries. Beyond that, we can perceive our ever-present desires to ease labour’s burden, to understand what makes us human and, perhaps, to achieve a form of divinity by becoming our own creators. 

This article first appeared in the 23 March 2017 issue of the New Statesman, Trump's permanent revolution