How YouTube can save the world

Janet Jackson’s accidental breast exposure may, indirectly, help earth avoid deadly asteroids.

When Hollywood rewrites this story, it will become known as the day that Janet Jackson saved planet earth. According to company legend, YouTube was created after one of its inventors had trouble accessing a video of Jackson’s moment of “wardrobe malfunction” breast exposure during the 2004 Super Bowl. The video-sharing website’s latest achievement is to become the source of scientific data that might help us evade the next big asteroid threat.

You might remember the last big one: it exploded in the air 27 kilometres above Chelyabinsk in the Russian Urals on 15 February this year. The explosion was equivalent to the detonation of 500,000 tonnes of TNT – enough to damage buildings and injure several hundred people. Perhaps not enough to get itself a Hollywood re-enactment, though.

Fortunately, the asteroid’s passage through earth’s atmosphere made it glow far brighter than the early-morning sun, causing locals to whip out their phones and record its flight. The high incidence of insurance fraud in Russia also helped – many cars are equipped with dashboard cameras, which recorded the event.

On 6 November, a group of scientists published an analysis of these videos, and discovered that our risk of being hit by similar asteroids is ten times higher than we thought. The researchers were able to deduce the asteroid’s mass from its flight path: it was twice as heavy as initial estimates had suggested. The lesson is that we need to pay attention to the threat from orbiting objects much smaller than those we have been keeping an eye on.

Things were much easier when we only needed to worry about the larger rocks orbiting the sun. The cut-off used to be about one kilometre in diameter; we had concluded that anything smaller would most likely burn up in our atmosphere and inflict near-negligible damage. We know the orbits of all these big rocks; we don’t, however, have a clue where the millions of smaller rocks are, or whether they might hit earth at any point. The YouTube-derived data suggests that we should start to find out and is certain to inform the activities of the Nasa asteroid-tracking telescope due to come online in 2015.

Atlas (Asteroid Terrestrial-Impact Last Alert System) will need broad shoulders: for the foreseeable future, it will be the only means by which we can reliably detect an imminent impact with these newly threatening smaller asteroids. Existing early-warning systems watch only certain patches of the sky and aren’t great at picking out objects that are smaller than one kilometre.

Nasa has plans to put an asteroid-hunting camera called NeoCam into orbit (no launch date yet) and a group of concerned citizens is raising money to build Sentinel, a similar eye in the sky. Until either of those is deployed, it’ll be down to Atlas.

Atlas will give us a week’s warning of any asteroid on course to hit earth with an impact equivalent to the detonation of several megatonnes of TNT. If the asteroid is bigger, we should know about it three weeks in advance.

If you think that will give us time to send swarthy heroes up to attach a nuclear bomb to the asteroid and deflect it away from its collision course, think again. There is no agency on earth with the mandate to do this – and certainly no one with the necessary equipment or expertise. So all you can expect is plenty of time to charge your phone’s battery and ensure you are the first to get the video of its arrival on to YouTube. Then you have to hope there’ll still be some scientists around to appreciate your efforts.

Michael Brooks holds a PhD in quantum physics. He writes a weekly science column for the New Statesman, and his most recent book is At the Edge of Uncertainty: 11 Discoveries Taking Science by Surprise.

This article first appeared in the 13 November 2013 issue of the New Statesman, The New Exodus

The rise of the racist robots

Advances in artificial intelligence are at risk of being held back by human ignorance.

As far as dystopian visions of the future go, you can’t get much worse than the words “Nazi robots”. But while films, TV and games might conjure up visions of ten-foot titanium automatons with swastikas for eyes, the real racist robots – much like their human counterparts – are often much more subtle.

Last night, Stephen Hawking warned us all about artificial intelligence. Speaking at the opening of the Leverhulme Centre for the Future of Intelligence (LCFI) at Cambridge University, Hawking labelled AI “either the best, or the worst thing, ever to happen to humanity.” It’s an intelligent warning – many experts are already worried that AI will destroy the world of work – but it homogenises humans. What is the “best thing” for some may be the “worst thing” for others, and nowhere is this clearer than in the issue of race.

It started with the Nikon Coolpix S630. In 2009, Joz Wang, a Taiwanese-American, bought the camera for her mother, and was shocked when a message on the screen asked “Did someone blink?” after she took a picture of herself. In July 2015, Google Photos came under fire after its image recognition software tagged Jacky Alciné and his friend, both of whom are black, as “Gorillas”. In September of the same year, a video showing an automatic soap dispenser refusing to respond to a black hand went viral. You might dismiss these examples as harmless bugs or honest mistakes, but they still tell us a lot about the way the technology industry tests its products – and therefore, which customers it values.

But then it got worse. This year alone, the first beauty contest judged by AI had only one dark-skinned winner out of 44, Princeton academics discovered that a popular language-processing algorithm found “black” names unpleasant, and an American software program used to predict future criminals rated black people as higher risk. And who can forget Microsoft’s ill-fated chatbot Tay? The bot – which was taught to converse by mimicking other Twitter users’ speech – was taken offline after 16 hours because it began spouting sexist and racist messages.

We could sit here and debate whether an AI can truly be considered racist, but it wouldn’t change the outcome of events. These algorithms and machines aren’t explicitly programmed to be racist – and their designers usually aren’t prejudiced themselves – but that doesn’t change the consequences of their use. The more dominant AI becomes in our world, the more problematic this will become. Imagine the consequences of racial bias in AI job-screening tools, dating sites, mortgage advisers, insurance companies, and so on.

“Bias in AI systems is a vital issue,” says Calum Chace, the best-selling author of Surviving AI and a speaker on how the technology will affect our future. “We humans are deplorably biased – even the best of us. AIs can do better, and we need them to, but we have to ensure their inputs are unbiased.”

To do this, Chace explains, we need to figure out the root of the “racism”. Pretty much no one is deliberately designing their AI to be racist – Google’s chief social architect, Yonatan Zunger, responded quickly to the “Gorillas” incident, Tweeting “This is 100% Not OK.” But the fact that only two per cent of Google employees are black is perceived as part of the problem, as in many of these instances the technology was designed with white people in mind. “The chief technology officer of the company that ran the beauty contest explained that its database had a lot more white people than Indian people and that it was ‘possible’ that because of that their algorithm was biased,” says Chace.

There are also technical solutions. Chace explains that machine learning systems work best when they are fed huge quantities of data. “It is likely that the system was trained on too few images – a mere 6,000, compared with the many millions of images used in more successful machine learning systems. As a senior Googler said, machine learning becomes ‘unreasonably effective’ when trained on huge quantities of data. Six thousand images are probably just not enough.”
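Chace’s point about data volume is easy to see in miniature. The sketch below is invented purely for illustration – it has nothing to do with the beauty-contest system or any real product, and every group, name and number in it is an assumption – but it shows how a classifier trained on a lopsided dataset (echoing the 6,000-image figure, set against a much smaller minority group) can end up far less accurate for the under-represented group.

```python
# Invented illustration: how an imbalanced training set can skew a classifier
# against an under-represented group. Groups, numbers and features are made up.
import math
import random

random.seed(0)

def make_samples(group, n, centre):
    # One numeric "feature" per sample, drawn around a per-group centre.
    return [(random.gauss(centre, 1.0), group) for _ in range(n)]

# Training data: 6,000 examples of group A but only 60 of group B.
train = make_samples("A", 6000, 0.0) + make_samples("B", 60, 2.0)

def fit(samples):
    # Estimate a per-group mean and prior from the (imbalanced) training data,
    # i.e. a one-feature Gaussian naive Bayes model with unit variance.
    by_group = {}
    for x, group in samples:
        by_group.setdefault(group, []).append(x)
    total = len(samples)
    return {g: (sum(xs) / len(xs), len(xs) / total) for g, xs in by_group.items()}

model = fit(train)

def predict(x):
    # Pick the group with the highest log prior + log likelihood.
    def score(group):
        mean, prior = model[group]
        return math.log(prior) - (x - mean) ** 2 / 2
    return max(model, key=score)

# Evaluate on a balanced test set: the rare group fares far worse, because the
# learned prior pushes the decision boundary towards it.
for group, centre in [("A", 0.0), ("B", 2.0)]:
    test = make_samples(group, 1000, centre)
    correct = sum(1 for x, g in test if predict(x) == g)
    print(f"group {group}: {correct / len(test):.1%} correctly recognised")
```

Run as written, the toy model recognises the well-represented group almost perfectly while misclassifying most of the rare one – not because anyone programmed it to, but because the skewed training data pulled its decision boundary in that direction.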

Now more than ever, it is important to straighten out these issues before AI becomes even more powerful and prevalent in our world. It is one thing for intelligent machines to drive our cars and provide our insurance quotes, and another thing for them to decide who lives and dies when it comes to war. “Lethal autonomous weapons systems (LAWS) will increasingly be deployed in war, and in other forms of conflict,” says Chace. “This will happen whether we like it or not, because machines can make decisions better and faster than humans. A drone which has to continually ‘phone home’ before engaging its opponent will quickly be destroyed by a fully autonomous rival. If we get it right, these deadly machines will reduce the collateral damage imposed by conflict. But there is plenty of work to be done before we get to that enviable position.”

Whether or not this vision of the future comes to fruition in 10, 20, or 100 years, it is important to prepare for it – and other possibilities. Despite constant advances in technology, there is still no such thing as a “conscious” robot that thinks and feels as humans do. In itself, an AI cannot be racist. To solve this problem, then, it is humans that need fixing.

Amelia Tait is a technology and digital culture writer at the New Statesman.