
The dress of many colours: is it blue and black or white and gold?

A recent debate on the colour of a dress has broken the internet. But is it all just a visual illusion? 

The infamous dress, in all its variety. Photo: swiked/Tumblr

The internet went into meltdown last night over the colour of a (let’s face it) ugly dress uploaded by Tumblr user swiked. The disagreement divided social media: is it blue and black, white and gold, or something in between?

Photo: swiked/Tumblr

(By the way, it’s clearly blue and black.)

When things start to make us question our own eyes, we turn to science to tell us everything is going to be OK. So let’s look at how our brains translate the colour that hits our retinas:

Visible light can be broken down into various wavelengths, which correspond to different perceivable colours; which wavelengths reach your eye depends on which ones the object reflects. This reflected light enters through the lens and hits a light-sensitive layer of tissue at the back of the eye called the retina, from which a cascade of neural messages is sent to the visual cortex – the part of the brain that processes visual information. However, the quality of the light that reaches the retina plays a big part.
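
As a rough guide, here is how wavelength bands map on to the colour names most of us would report. This is a minimal Python sketch; the band edges are approximate textbook values, and perception varies from observer to observer:

    # Rough wavelength bands for perceived colours (approximate textbook values, in nanometres).
    BANDS = [
        (380, 450, "violet"),
        (450, 495, "blue"),
        (495, 570, "green"),
        (570, 590, "yellow"),
        (590, 620, "orange"),
        (620, 750, "red"),
    ]

    def colour_name(wavelength_nm):
        """Return the colour name most observers would report for a wavelength."""
        for low, high, name in BANDS:
            if low <= wavelength_nm < high:
                return name
        return "outside the visible range"

    print(colour_name(470))  # blue
    print(colour_name(580))  # yellow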

Luminance is the intensity of light emitted from a surface, per unit area, in a given direction. The brain has to work out how much of the luminance (or lack thereof) is caused by the colour of the surface itself and how much is caused by shadows. “In the case of the dress, some people are deciding that there is a fair amount of illumination on a blue and black (or less reflective) dress. Other people are deciding that it is less illumination on a white and gold dress (it is in shadow, but more reflective),” said Cedar Riener, an associate professor of psychology at Randolph-Macon College, in a BuzzFeed interview.

An example is the famous Adelson checkerboard optical illusion, in which square A and square B are the same shade of grey, even though square B, sitting in a shadow, looks much lighter.



So why do different people’s brains interpret light differently? Humans have evolved to see in daylight, which ranges from blue-white at noon to pinkish-red at dawn. Bevil Conway, a neuroscientist who studies colour and vision at Wellesley College, told Wired: “What’s happening here is your visual system is looking at this thing, and you’re trying to discount the chromatic bias of the daylight axis. So people either discount the blue side, in which case they end up seeing white and gold, or discount the gold side, in which case they end up with blue and black.”

Conway also suggests that the white-gold or blue-black bias could be linked to whether we prefer daylight or night time. So those who perceive the dress as white-gold might be interpreting it as though it's in blue natural lighting, and those who perceive it as blue-black might be interpreting it as though it's in yellow artificial lighting. “I bet night owls are more likely to see it as blue-black,” Conway says.

Based on correct white-balancing, we can confirm that the dress is blue and black (sorry white and gold die-hards), but ultimately, visual perception is in the eye of the brain holder.
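
White-balancing is simple enough to sketch in a few lines of Python. In the minimal example below, every RGB value is invented for illustration, not sampled from the photo; the point is that dividing the same bluish pixel by two different assumed light sources yields two different surface colours, which is roughly the judgement call our visual systems are making:

    # A minimal white-balance sketch (von Kries-style division by the illuminant).
    # All RGB values here are hypothetical, chosen only to illustrate the effect.

    def estimate_surface(pixel, illuminant):
        """Divide out the assumed light source, then rescale to the 0-255 range."""
        estimate = [p / i for p, i in zip(pixel, illuminant)]
        scale = 255 / max(estimate)
        return tuple(round(channel * scale) for channel in estimate)

    dress_pixel = (120, 130, 160)      # a bluish pixel, loosely like the photo

    neutral_light = (255, 255, 255)    # assumption: even, white illumination
    bluish_shadow = (140, 150, 210)    # assumption: bluish daylight shadow

    print(estimate_surface(dress_pixel, neutral_light))  # (191, 207, 255) - still blue
    print(estimate_surface(dress_pixel, bluish_shadow))  # (252, 255, 224) - warm white

Assume the bluish cast belongs to the light and the brain subtracts it, handing you a white and gold dress; assume the cast belongs to the fabric, and you see blue and black.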

Luckily for the New Statesman, Ed Miliband has established Labour’s position on the blue-black versus white-gold spectrum.

Tosin Thompson writes about science and was the New Statesman's 2015 Wellcome Trust Scholar. 


The rise of the racist robots

Advances in artificial intelligence are at risk of being held back by human ignorance. 

As far as dystopian visions of the future go, you can’t get much worse than the words “Nazi robots”. But while films, TV and games might conjure up visions of ten-foot titanium automatons with swastikas for eyes, the real racist robots – much like their human counterparts – are often much more subtle.

Last night, Stephen Hawking warned us all about artificial intelligence. Speaking at the opening of the Leverhulme Centre for the Future of Intelligence (LCFI) at Cambridge University, Hawking labelled AI “either the best, or the worst thing, ever to happen to humanity.” It’s an intelligent warning – many experts are already worried that AI will destroy the world of work – but it homogenises humans. What is the “best thing” for some may be the “worst thing” for others, and nowhere is this clearer than in the issue of race.

It started with the Nikon Coolpix S630. In 2009, Joz Wang, a Taiwanese-American, bought the camera for her mother, and was shocked when a message on the screen asked “Did someone blink?” after she took a picture of herself. In July 2015, Google Photos came under fire after its image recognition software tagged Jacky Alciné and his friend, both of whom are black, as “Gorillas”. In September of the same year, a video showing an automatic soap dispenser refusing to respond to a black hand went viral. You might dismiss these examples as harmless bugs or honest mistakes, but they still tell us a lot about the way the technology industry tests its products – and therefore, which customers it values.

But then it got worse. This year alone, the first beauty contest judged by AI produced only one dark-skinned winner out of 44, Princeton academics discovered that a popular language-processing algorithm found “black” names unpleasant, and an American software tool used to predict future criminals rated black people as higher risk. And who can forget Microsoft’s ill-fated chatbot Tay? The bot – which was taught to converse by mimicking other Twitter users’ speech – was taken offline after 16 hours because it began spouting sexist and racist messages.

We could sit here and debate whether an AI can truly be considered racist, but it wouldn’t change the outcome of events. These algorithms and machines aren’t explicitly programmed to be racist – and their designers usually aren’t prejudiced themselves – but that doesn’t change the consequences of their use. The more dominant AI becomes in our world, the more problematic this will become. Imagine the consequences of racial bias in AI job-screening tools, dating sites, mortgage advisers, insurance companies, and so on.

“Bias in AI systems is a vital issue,” says Calum Chace, the best-selling author of Surviving AI and a speaker on how the technology will affect our future. “We humans are deplorably biased – even the best of us. AIs can do better, and we need them to, but we have to ensure their inputs are unbiased.”

To do this, Chace explains, we need to figure out the root of the “racism”. Pretty much no one is deliberately designing their AI to be racist – Google’s chief social architect, Yonatan Zunger, responded quickly to the “Gorillas” incident, tweeting: “This is 100% Not OK.” But the fact that only two per cent of Google employees are black is perceived as part of the problem, as in many of these instances the technology was designed with white people in mind. “The chief technology officer of the company that ran the beauty contest explained that its database had a lot more white people than Indian people and that it was ‘possible’ that because of that their algorithm was biased,” says Chace.

There are also technical solutions. Chace explains that machine learning systems work best when they are fed huge quantities of data. “It is likely that the system was trained on too few images – a mere 6,000, compared with the many millions of images used in more successful machine learning systems. As a senior Googler said, machine learning becomes ‘unreasonably effective’ when trained on huge quantities of data. Six thousand images are probably just not enough.”
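
To see how a skewed training set produces biased behaviour with no prejudice written anywhere in the code, consider a toy sketch in Python. The single “feature” here is an invented number standing in for skin-tone-dependent image statistics; the groups, the numbers and the detector itself are all hypothetical:

    # Toy sketch of how imbalanced training data produces biased errors.
    # The feature, groups and thresholds are all made up for illustration.
    import random

    random.seed(0)

    # Training set: 95 per cent group A (feature around 0.8), 5 per cent group B
    # (feature around 0.3). Every example is a genuine positive ("this is a face").
    train = [random.gauss(0.8, 0.1) for _ in range(950)] + \
            [random.gauss(0.3, 0.1) for _ in range(50)]

    # A naive detector: accept anything close to the average training example.
    centre = sum(train) / len(train)
    THRESHOLD = 0.25

    def detect(x):
        return abs(x - centre) < THRESHOLD

    group_a_test = [random.gauss(0.8, 0.1) for _ in range(1000)]
    group_b_test = [random.gauss(0.3, 0.1) for _ in range(1000)]

    print("hit rate, group A:", sum(map(detect, group_a_test)) / 1000)  # roughly 0.99
    print("hit rate, group B:", sum(map(detect, group_b_test)) / 1000)  # roughly 0.01

The detector never mentions race; it has simply learned the average of what it was shown. Train it on data that is 95 per cent one group, and it quietly fails the other – which is exactly why the size and balance of the training set matter.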

Now more than ever, it is important to straighten out these issues before AI becomes even more powerful and prevalent in our world. It is one thing for intelligent machines to drive our cars and provide our insurance quotes, and another thing for them to decide who lives and dies when it comes to war. “Lethal autonomous weapons systems (LAWS) will increasingly be deployed in war, and in other forms of conflict,” says Chace. “This will happen whether we like it or not, because machines can make decisions better and faster than humans. A drone which has to continually ‘phone home’ before engaging its opponent will quickly be destroyed by a fully autonomous rival. If we get it right, these deadly machines will reduce the collateral damage imposed by conflict. But there is plenty of work to be done before we get to that enviable position.”

Whether or not this vision of the future comes to fruition in 10, 20, or 100 years, it is important to prepare for it – and other possibilities. Despite constant advances in technology, there is still no such thing as a “conscious” robot that thinks and feels as humans do. In itself, an AI cannot be racist. To solve this problem, then, it is humans that need fixing.

Amelia Tait is a technology and digital culture writer at the New Statesman.