If everything's being automated, let's hope we'll like our robots

The robots may be taking our jobs - even making our coffee - but that doesn't mean we'll be fond of them.

How do you make the inevitable robot uprising easier to stomach? Those thinking we were guaranteed a future of flipping burgers and making coffee for each other will be disheartened to hear that coffee company Briggo has managed to solve the latter of those issues with an autonomous kiosk. Christopher Mims at Quartz explains:

Inside, protected by stainless steel walls and a thicket of patents, there is a secret, proprietary viscera of pipes, storage vessels, heating instruments, robot arms and 250 or so sensors that together do everything a human barista would do if only she had something like perfect self-knowledge. “How is my milk steamer performing? Am I a half-degree off in my brewing temperature? Is my water pressure consistent? Is there any residue buildup on my brewing chamber that might require me to switch to a backup system?”

The Briggo coffee kiosk knows how to make a perfect coffee because it was “trained” by an award-winning barista, Patrick Pierce. He's since left the company, but no matter: as in the techno-utopian Singularity, whose adherents believe that some day we will all upload our brains to computers, once a barista's essence has been captured by Briggo, his human form is just a legacy system.

That last bit will sound familiar to Star Wars fans - Patrick Pierce is Starbucks' Jango Fett, and his wood-panelled, Yves Behar-designed clones are the stormtrooper clones of high street coffee. Automation isn't just able to match us; it's able to match the absolute best of us.

It's worth reading Mims' piece in full, as he goes on to explain that Nespresso - that little coffee capsule system - has replaced the coffee machines in many of Europe's Michelin-starred restaurants. Anyone, with minimal training, can make a consistently top-class coffee using those capsules. Why bother training a barista? And, as the Briggo kiosk shows, why even bother hiring a human to put the capsule into the machine?

For those who actually enjoy human interaction at places like coffee shops, this is a sad thing. Robots aren't friends. A designer's basic job is to make things that humans can and want to use, and that's going to start meaning “making robots that we want to interact with”.

To wit, here's a video some researchers at MIT have made demonstrating their idea for a helpful, flying drone that people can call with their smartphones. It's a bit like a tour guide:

Drones, of course, have a terrible reputation, because for every one that is put to good use delivering burritos, there are others being used to bomb people without warning in places like Pakistan and Yemen. As Dezeen tells it:

Yaniv Jacob Turgeman, research and development lead at Senseable City Lab, said SkyCall was designed to counter the sinister reputation of drones, and show they can be useful. "Our imaginations of flying sentient vehicles are filled with dystopian notions of surveillance and control, but this technology should be tasked with optimism," he told Dezeen.

That optimism comes in the form of a friendly, female - but still distinctly robotic - voice. It's like something from a computer game. Is it particularly reassuring? Not massively. It doesn't give off that trustworthy vibe you'd get from another human, or even a paper map.

Trustworthiness is a theme that science fiction has explored for years, of course, from Fritz Lang's Metropolis to Will Smith's I, Robot, so it's not surprising to see designers begin to tackle it. You also get the idea of the "uncanny valley" thrown around: if you plot "human likeness" on the x-axis and "how comfortable people are with it" on the y-axis, you get a steady rise that collapses (into a "valley" shape) just before it reaches actual human likeness. That is, the objects that creep us out the most are the ones that look almost, but not quite, human. It's all a way of saying that creating things that look like humans, for situations where we expect humans, is tricky.
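
To make the shape of that graph concrete, here's a minimal sketch that plots a hypothetical uncanny valley curve. The numbers are invented purely for illustration - the position and depth of the dip aren't taken from any study.

```python
# Minimal sketch of the "uncanny valley" curve described above.
# The curve is hypothetical: comfort rises with human likeness, collapses
# sharply just before full likeness, then recovers for an actual human.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0, 1, 500)  # 0 = industrial robot, 1 = healthy human
# Rising trend with an invented dip centred around ~85% human likeness.
comfort = likeness - 1.2 * np.exp(-((likeness - 0.85) ** 2) / 0.004)

plt.plot(likeness, comfort)
plt.xlabel("Human likeness")
plt.ylabel("How comfortable people are with it")
plt.title("Illustrative uncanny valley curve (invented values)")
plt.show()
```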

Studies that have looked at what kind of human-likeness we want in our robots have given rise to some surprising results. Akanksha Prakash from Georgia Tech carried out one such study, and its results (published earlier this month) show that, often, participants don't actually want to be helped by human-like robots. The more delicate the task - like having help in the bath - the more divisive the opinions on whether something human-like is better.

There's also a generational divide, with younger people not minding things that look like human-robot hybrids around the house, whereas older people prefer the straightforwardly human. There are clearly a lot of psychological factors at work that are going to prove a challenge to designers hoping that their product - whatever it is - becomes a hit.

Perhaps when the robots arrive they'll still have some human-like features, in the same way that some smartphones still use yellow, lined paper to give people a clue that the app they've opened is for making notes - or like wood-panelling on the side of an autonomous coffee kiosk.


Ian Steadman is a staff science and technology writer at the New Statesman. He is on Twitter as @iansteadman.


“Stinking Googles should be killed”: why 4chan is using a search engine as a racist slur

Users of the anonymous forum are targeting Google after the company introduced a programme for censoring abusive language.

Contains examples of racist language and memes.

“You were born a Google, and you are going to die a Google.”

Despite the lack of obscenity and profanity in this sentence, you have probably realised it was intended to be offensive. It is just one of hundreds of similar messages posted by the users of 4chan’s /pol/ board – an anonymous forum where people go to be politically incorrect. But they haven’t suddenly seen the error of their ways about using the n-word to demean their fellow human beings – instead they are trying to turn the word “Google” itself into a racist slur.

In an undertaking known as “Operation Google”, some 4chan users are resisting Google’s latest artificial intelligence program, Conversation AI, by swapping smears for the names of Google products. Conversation AI aims to spot and flag offensive language online, with the eventual possibility that it could automatically delete abusive comments. The famously outspoken forum 4chan, and the similar website 8chan, didn’t like this, and began their campaign which sees them refer to “Jews” as “Skypes”, Muslims as “Skittles”, and black people as “Googles”.

If it weren’t for the utterly abhorrent racism – which includes users conflating Google’s chat tool “Hangouts” with pictures of lynched African-Americans – it would be a genius idea. The group aims to force Google to censor its own name, making its AI redundant. Yet some have acknowledged this might not ultimately work – as the AI will be able to use contextual clues to filter out when “Google” is used positively or pejoratively – and their ultimate aim is now simply to make “Google” a racist slur as revenge.
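
To see why word-swapping defeats a simple blocklist but not necessarily a contextual model, here is a toy sketch. It is not Conversation AI - the blocklist, the "hostile cue" list and both functions are invented for illustration only.

```python
# Toy illustration: a plain word-list filter is defeated by swapping a slur
# for an ordinary word, but a filter that looks at surrounding context
# (crudely faked here with hostile phrasing cues) still has signal to use.
# None of this is Conversation AI; it is invented purely for illustration.

BLOCKLIST = {"slur_a", "slur_b"}  # placeholder tokens, not real words


def wordlist_flag(comment: str) -> bool:
    """Flag a comment only if it contains a blocklisted token."""
    tokens = (tok.strip(".,!?\u201c\u201d").lower() for tok in comment.split())
    return any(tok in BLOCKLIST for tok in tokens)


def contextual_flag(comment: str) -> bool:
    """Crude stand-in for a contextual model: look for hostile framing
    rather than for specific banned words."""
    hostile_cues = ("stinking", "should be killed", "go back to")
    text = comment.lower()
    return any(cue in text for cue in hostile_cues)


print(wordlist_flag("Stinking Googles should be killed"))    # False: no banned word present
print(contextual_flag("Stinking Googles should be killed"))  # True: hostile framing detected
```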


Posters from 4chan

“If you're posting anything on social media, just casually replace n****rs/blacks with googles. Act as if it's already a thing,” wrote one anonymous user. “Ignore the company, just focus on the word. Casually is the important word here – don't force it. In a month or two, Google will find themselves running a company which is effectively called ‘n****r’. And their entire brand is built on that name, so they can't just change it.”

There is no doubt that Conversation AI is questionable to anyone who values free speech. Although most people desire a nicer internet, it is hard to agree that this should be achieved by blocking out large swathes of people, and putting the power to do so in the hands of one company. Additionally, algorithms can’t yet accurately detect sarcasm and humour, so false-positives are highly likely when a bot tries to identify whether something is offensive. Indeed, Wired journalist Andy Greenberg tested Conversation AI out and discovered it gave “I shit you not” 98 out of 100 on its personal attack scale.
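
For a sense of what querying a scoring system like this looks like in practice, here is a hedged sketch using the request shape of Google's later public Perspective API as a stand-in; the endpoint, field names and API key below are assumptions, not the Conversation AI interface Greenberg tested, and the 0–1 score only roughly corresponds to the 0–100 scale quoted above.

```python
# Sketch of querying a toxicity-scoring service, using the shape of Google's
# public Perspective API as a stand-in for the system described above.
# The endpoint, request fields and API key are assumptions for illustration.
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "I shit you not"},
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}},
}

response = requests.post(URL, json=payload, timeout=10)
response.raise_for_status()
score = response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"Toxicity: {score:.2f}")  # a 0-1 probability; multiply by 100 for the scale quoted above
```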

Yet these 4chan users have made it impossible to agree with their fight against Google by combining it with their racism. Google scores the word “moron” 99 out of 100 on its offensiveness scale. Had protestors decided to replace this – or possibly even more offensive words like “bitch” or “motherfucker” – with “Google”, pretty much everyone would be on board.

Some 4chan users are aware of this – and indeed it is important not to consider the site a unanimous entity. “You're just making yourselves look like idiots and ruining any legitimate effort to actually do this properly,” wrote one user, while some discussed their concerns that “normies” – i.e. normal people – would never join in. Other 4chan users are against Operation Google because they see it as self-censorship, or simply as stupid.


Memes from 4chan

But anyone who disregards these efforts as the work of morons (or should that be Bings?) clearly does not understand the power of 4chan. The site brought down Microsoft’s AI Tay in a single day, brought the Unicode swastika (卐) to the top of Google’s trends list in 2008, hacked Sarah Palin’s email account, and leaked a large number of celebrity nudes in 2014. If the Ten Commandments were rewritten for the modern age and Moses took to Mount Sinai to wave two 16GB Tablets in the air, then the number one rule would be short and sweet: Thou shalt not mess with 4chan.

It is not yet clear how Google will respond to the attack, or whether it will ultimately affect the AI. Yet despite what ten years of Disney conditioning taught us as children, the world isn’t split into goodies and baddies. While 4chan’s methods are deplorable, their aim of questioning whether one company should have the power to censor the internet is not.

Google also hit headlines this week for its new “YouTube Heroes” program, a system that rewards YouTube users with points when they flag offensive videos. It’s not hard to see how this kind of crowdsourced censorship is undesirable, particularly since, once again, the chance of things being incorrectly flagged is huge. A few weeks ago, popular YouTubers also hit back at censorship that saw them lose their advertising money from the site, leading #YouTubeIsOverParty to trend on Twitter. Perhaps ultimately, 4chan didn't need to go on a campaign to damage Google's name. It might already have been doing a good enough job of that itself.

Google has been contacted for comment.

Amelia Tait is a technology and digital culture writer at the New Statesman.