We need to stop worrying and trust our robot researchers

The work of Francis Crick and James Watson gives us a vision of what's to come.

It’s now 60 years since the publication of the structure of DNA. As we celebrate the past, the work of Francis Crick and James Watson also gives us a vision of what’s to come. Their paper was not subjected to peer review, today’s gold standard for the validation of scientific research. Instead, it was discussed briefly over a lunch at the Athenaeum Club. In an editorial celebrating the anniversary, the journal Nature, which originally published the research, points out that this is “unthinkable now”.

However, peer review has always been somewhat patchy and it is becoming ever more difficult. This is the age of “big data”, in which scientists make their claims based on analysis of enormous amounts of information, often carried out by custom-written software. The peer review process, done on an unpaid, voluntary basis in researchers’ spare time, doesn’t have the capacity to go through all the data-analysis techniques. Reviewers have to rely on their intuition.

There are many instances of this leading science up the garden path but recently we were treated to a spectacular example in economics. In 2010, the Harvard professors Carmen Reinhart and Kenneth Rogoff published what quickly became one of the most cited economics papers of the year. Simply put, it said that if your gross public debt is more than 90 per cent of your national income, you are going to struggle to achieve any economic growth.

Dozens of newspapers quoted the research, the Republican Party built its budget proposal on it and no small number of national leaders used it to justify their preferred policies. Which makes it all the more depressing that it has been unmasked as completely wrong.

The problem lay in poor data-handling. The researchers left out certain data points, gave questionable weight to parts of the data set and – most shocking of all – made a simple formula error in their Excel spreadsheet.

The Harvard paper was not peer-reviewed before publication. It was only when the researchers shared software and raw data with peers sceptical of the research that the errors came to light.

The era of big data in science will stand or fall on such openness and collaboration. It used to be that collaboration arose from the need to create data. Crick and Watson collaborated with Maurice Wilkins to gather the data they needed – from Rosalind Franklin’s desk drawer, without her knowledge or permission. That was what gave them their pivotal insight. However, as Mark R Abbott of Oregon State University puts it, “We are no longer data-limited but insight-limited.”

Gaining insights from the data flood will require a different kind of science from Crick's and Watson's, and it may turn out to be one to which computers and laboratory-based robots are better suited than human beings. In another 60 years, we may well be looking back at an era when silicon scientists made the most significant discoveries.

A robot working in a lab at Aberystwyth University made the first useful computer-generated scientific contribution in 2009, in the field of yeast genomics. It came up with a hypothesis, performed experiments and reached a conclusion, then had its work published in the journal Science. Since then, computers have made further inroads. So far, most (not all) have been checked by human beings but that won't be possible for long. Eventually, we'll be taking their insights on trust, with intuition stretched almost to breaking point – just as we did with Crick and Watson.

President Obama inspects a robot built in Virginia. Photograph: Getty Images.

Michael Brooks holds a PhD in quantum physics. He writes a weekly science column for the New Statesman, and his most recent book is At the Edge of Uncertainty: 11 Discoveries Taking Science by Surprise.


With everything from iPhones to clothing turning monochrome, is the West afraid of colour?

If modern design appears particularly achromatic, it only reflects the "chromophobia" which courses through the history of Western thought.

To many English observers, 1666 – the year that the poet John Dryden christened the annus mirabilis, or “year of miracles” – wasn’t especially miraculous. The country was gripped by plague and, after a hot, dry summer, the Great Fire cut a swath through London. But for Isaac Newton, then still a student, it did prove illuminating. It was in 1666 that he first used prisms to prove that white light was not a pure, indissoluble substance but was made up of different coloured rays. This was such a profound challenge to the prevailing world-view that even Newton was shaken. “I perswade my self,” he wrote, “that this Assertion above the rest appears Paradoxical, & is with most difficulty admitted.”

The belief that colours are inferior and therefore naturally subordinate, rather than fundamental, was not new in Newton’s day, nor did it end with his discovery of spectral colour. A pattern of chromophobia – an aversion to colours – courses through Western thought.

Writing in the fourth century BC, Aristotle argued: “The most attractive colours would never yield as much pleasure as a definite image without colour.” For Renaissance artists, this idea was defined by the division between disegno, drawing or design, and colore. Disegno was the foundation of any serious artistic endeavour. The preference for achromatic, “intellectual” form is also evident in architecture. Despite rock-solid evidence from the 19th century proving that Greek marble buildings and statues were once brightly painted, the classical ideal has remained anachronistically bleached. And while modernist and postmodern architects have made some use of colour, the primacy of form is unmistakable in the work of everyone from John Pawson to Zaha Hadid and Toyo Ito.

A broad cultural dislike of colour is curious because, speaking in evolutionary terms, our ability to see it has been crucial to our success. Colour vision in primates developed between 38 and 65 million years ago, making it easier to spot ripening red and yellow fruits amid green foliage. Neurons devoted to visual processing occupy much more of our neocortex real estate than those devoted to hearing or touch. Estimates vary but the Optical Society of America has suggested that it may be possible for humans to distinguish between up to ten million different shades.

And we have put this skill to good use. Bold colours have been used by many cultures to mark temporal and spiritual power. Tyrian purple, a rich, reddish dye said to resemble clotted blood, was made using an extract from two different kinds of Mediterranean shellfish and was beloved by emperors in the ancient world. A single pound of dyed cloth would cost a skilled craftsman three years’ wages and became steadily more expensive as the shellfish became rarer.

But even as such saturated colours were coveted, they also elicited disgust. The manufacture of many, including Tyrian purple, involved ingredients such as stale urine and dung. Dye and paintworks were relegated to the urban fringes. Increasingly, the wearing of bright colours was seen as vainglorious and ungodly. Protestants indicated their humility by whitewashing over jewel-coloured murals and smashing stained-glass windows in churches, and by restricting their sartorial palette predominantly to black. An echo prevails today in men's suits: colours are largely confined to small accessories such as ties, while white shirts are held up as the ne plus ultra of refined sophistication. (The late Apple co-founder Steve Jobs went one better, opting for a uniform of identical black turtlenecks.)

One reason for this distrust is that colours are difficult to conceptualise. Do they exist physically, or only in our brains? Does everyone see them the same way? Colours have been maligned as chaotic, fickle, irrational and female. The early Christian thinker St Augustine of Hippo accused them of “a seductive and dangerous sweetness”.

Our ambivalence to colour, however, has profited white. Like black, white has not been classed as a real colour since Newton. It has almost become an anti-colour. Take Apple, for example. Although Sir Jony Ive is usually credited with the company’s love for monochrome products (it was certainly Ive who brought this to its apogee), the trend predates his arrival. It can be traced back to the “Snow White” design language developed in the 1980s. Today, as consumer neophilia demands that technology be continually refreshed, Apple’s higher-end products are available in the smallest range of colours – usually just white, black and, for the Asian market, gold – while those lower down come in a slew of fruity brights.

White is not only big business for Apple. In 2014, a Californian man named Walter Liew was found guilty of 20 counts of economic espionage and sentenced to 15 years in jail for selling the secret to a very special shade of titanium-dioxide white, used in everything from luxury cars to tennis courts, to Chinese firms for $28m.

Perhaps the final word on the matter should go to Le Corbusier. In 1925, the great modernist recommended that all interior walls should be whitewashed, to act as a moral and spiritual restorative. But he wasn’t just advocating white for white’s sake: although he continued to dabble with colour, he disapproved of it, too. “Let us leave to the clothes-dyers,” he wrote, “the sensory jubilations of the paint tube.”

“The Secret Lives of Colour” (John Murray) by Kassia St Clair will be published on 20 October

This article first appeared in the 26 May 2016 issue of the New Statesman, The Brexit odd squad