If everything's being automated, let's hope we'll like our robots

The robots may be taking our jobs - even making our coffee - but that doesn't mean we'll be fond of them.

How do you make the inevitable robot uprising easier to stomach? Anyone assuming we were guaranteed a future of flipping burgers and making coffee for each other will be disheartened to hear that coffee company Briggo has solved the latter of those problems with an autonomous kiosk. Christopher Mims at Quartz explains:

Inside, protected by stainless steel walls and a thicket of patents, there is a secret, proprietary viscera of pipes, storage vessels, heating instruments, robot arms and 250 or so sensors that together do everything a human barista would do if only she had something like perfect self-knowledge. “How is my milk steamer performing? Am I a half-degree off in my brewing temperature? Is my water pressure consistent? Is there any residue buildup on my brewing chamber that might require me to switch to a backup system?”
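What that quote describes is, in effect, a fleet of little self-monitoring control loops. Here's a purely illustrative sketch of one - Briggo's actual software is proprietary, so the names, target and tolerance below are invented, with the half-degree figure borrowed from the quote:

```python
# Purely illustrative - Briggo's real control software is proprietary.
# Hypothetical sensor name, target and tolerance.

BREW_TEMP_TARGET_C = 93.0    # assumed target brewing temperature
BREW_TEMP_TOLERANCE_C = 0.5  # "Am I a half-degree off in my brewing temperature?"

def check_brew_temperature(reading_c: float) -> str:
    """Compare one sensor reading against the target and pick an action."""
    if abs(reading_c - BREW_TEMP_TARGET_C) <= BREW_TEMP_TOLERANCE_C:
        return "ok"
    # Out of tolerance: a kiosk like this might recalibrate, or
    # "switch to a backup system", as the quote puts it.
    return "switch_to_backup"

if __name__ == "__main__":
    for reading_c in (93.2, 92.3, 94.6):
        print(f"{reading_c}C -> {check_brew_temperature(reading_c)}")
```

Multiply something like that by 250 sensors and you have a machine with the "perfect self-knowledge" no human barista can manage.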

The Briggo coffee kiosk knows how to make a perfect coffee because it was “trained” by an award-winning barista, Patrick Pierce. He's since left the company, but no matter: as in the techno-utopian Singularity, whose adherents believe that some day we will all upload our brains to computers, once a barista's essence has been captured by Briggo, his human form is just a legacy system.

That last bit will sound familiar to Star Wars fans - Patrick Pierce is Starbucks' Jango Fett, and his wood-panelled, Yves Behar-designed kiosks are the stormtrooper clones of high street coffee. The machine isn't just able to match us - it's able to match the absolute best of us.

It's worth reading Mims' piece in full, as he goes on to explain that Nespresso - that little coffee capsule system - has replaced the coffee machines in many of Europe's Michelin-starred restaurants. Anyone, with minimal training, can make a consistently top-class coffee using those capsules. Why bother training a barista? And, as the Briggo kiosk shows, why even bother hiring a human to put the capsule into the machine?

For those who actually enjoy human interaction at places like coffee shops, this is a sad thing. Robots aren't friends. A designer's basic job is to make things that humans can and want to use, and that's going to start meaning “making robots that we want to interact with”.

To wit, here's a video some researchers at MIT have made demonstrating their idea for a helpful, flying drone that people can call with their smartphones. It's a bit like a tour guide:

Drones, of course, have a terrible reputation, because for every one that is put to good use delivering burritos, there are others being used to bomb people without warning in places like Pakistan and Yemen. As Dezeen tells it:

Yaniv Jacob Turgeman, research and development lead at Senseable City Lab, said SkyCall was designed to counter the sinister reputation of drones, and show they can be useful. "Our imaginations of flying sentient vehicles are filled with dystopian notions of surveillance and control, but this technology should be tasked with optimism," he told Dezeen.

That optimism comes in the form of a friendly, female - but still distinctly robotic - voice. It's like something from a computer game. Is it particularly reassuring? Not massively. It doesn't give off that trustworthy vibe you'd get from another human, or even a paper map.

Trustworthiness is a theme that science fiction has explored for years, of course, from Fritz Lang's Metropolis to Will Smith's I, Robot, so it's not surprising to see designers begin to tackle it. You also get the idea of the "uncanny valley" thrown around - if you plot "human likeness" on the x-axis of a graph and "affinity" (how comfortable people feel with the thing) on the y-axis, you get a steady upward correlation that collapses (into a "valley" shape) just before it reaches actual human likeness. That is, the objects that creep us out the most are the things that look as close to human as possible while just falling short. It's all a way of saying that creating things that look like humans, for situations where we expect humans, is tricky.
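To make the shape concrete, here's a minimal sketch of that curve. The numbers are hand-tuned purely to reproduce the familiar shape - they don't come from any study:

```python
# An illustrative uncanny valley curve - invented shape, not real data.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0.0, 1.0, 400)  # x-axis: human likeness
rise = np.sin(likeness * 0.6 * np.pi)  # affinity climbs with likeness...
valley = -1.2 * np.exp(-((likeness - 0.85) ** 2) / 0.003)  # ...then collapses just short of human
affinity = rise + valley

plt.plot(likeness, affinity)
plt.xlabel("human likeness")
plt.ylabel("affinity")
plt.title("The uncanny valley (illustrative)")
plt.show()
```

The dip just before the right-hand edge is the valley: almost-human reads as creepy, while either obviously-machine or fully-human does not.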

Studies into what kind of human-likeness we want in our robots have produced some surprising results. Akanksha Prakash from Georgia Tech carried out one such study, and its results (published earlier this month) show that participants often don't actually want to be helped by human-like robots. The more delicate the task - like having help in the bath - the more divided opinions were on whether something human-like is better.

There's also a generational divide, with younger people not minding things that look like human-robot hybrids around the house, whereas older people prefer the straightforwardly human. There are clearly a lot of psychological factors at work that are going to prove a challenge to designers hoping that their product - whatever it is - becomes a hit.

Perhaps when the robots arrive they'll still have some human-like features, in the same way that some smartphones still use yellow, lined paper to give people a clue that the app they've opened is for making notes - or like wood-panelling on the side of an autonomous coffee kiosk.


Ian Steadman is a staff science and technology writer at the New Statesman. He is on Twitter as @iansteadman.


Fark.com’s censorship story is a striking insight into Google’s unchecked power

The founder of the community-driven website claims its advertising revenue was cut off for five weeks.

When Microsoft launched its new search engine Bing in 2009, it wasted no time in trying to get the word out. By striking a deal with the producers of the American teen drama Gossip Girl, it made a range of beautiful characters utter the words “Bing it!” in a way that fell clumsily on the audience’s ears. By the early Noughties, “search it” had already been universally replaced by the words “Google it”, a phrase that had become so ubiquitous that anything else sounded odd.

A screenshot from Gossip Girl, via ildarabbit.wordpress.com

Like Hoover and Tupperware before it, Google’s brand name has now become a generic term.

Yet only recently have concerns about Google’s pervasiveness received mainstream attention. Last month, The Observer ran a story about Google’s autocomplete pulling up the suggested question “Are Jews evil?” and giving hate speech prominence on the first page of search results. Within a day, Google had altered the autocomplete results.

Though the company’s response may seem promising, it is important to remember that Google isn’t just a search engine (its parent company, Alphabet, has too many subdivisions to mention). Google AdSense is an online advertising service that allows websites - including the New Statesman itself - to profit from hosting advertisements on their pages. Yesterday, Drew Curtis, the founder of the internet news aggregator Fark.com, shared a story about his experiences with the service.

Under the headline “Google farked us over”, Curtis wrote:

“This past October we suffered a huge financial hit because Google mistakenly identified an image that was posted in our comments section over half a decade ago as an underage adult image – which is a felony by the way. Our ads were turned off for almost five weeks – completely and totally their mistake – and they refuse to make it right.”

The image was of a fully-clothed actress who was an adult at the time, yet Curtis claims Google flagged it because of “a small pedo bear logo” – a meme used to mock paedophiles online. More troubling than Google’s decision, however, is the difficulty that Curtis had contacting the company and resolving the issue, a process which he claims took five weeks. He wrote:

“During this five week period where our ads were shut off, every single interaction with Google Policy took between one to five days. One example: Google Policy told us they shut our ads off due to an image. Without telling us where it was. When I immediately responded and asked them where it was, the response took three more days.”

Curtis claims that other sites have had these issues but are too afraid of Google to speak out publicly. A Google spokesperson says: "We constantly review publishers for compliance with our AdSense policies and take action in the event of violations. If publishers want to appeal or learn more about actions taken with respect to their account, they can find information at the help centre here.”

Fark.com has lost revenue because of Google’s decision, according to Curtis, who sent out a plea for new subscribers to help it “get back on track”. It is easy to see how a smaller website could have been ruined in a similar scenario.


The offending image, via Fark

Google’s decision was not sinister, and it is obviously important that the company tackles content that violates its policies. The lack of transparency around such decisions, and the difficulty of getting in touch with Google, are troubling, however, as much of the media relies on the AdSense service to exist.

Even if Google doesn’t actively abuse this power, it is disturbing that it has the means to strangle any online publication, and worrying that smaller organisations can struggle to get in touch with it to resolve such issues. In light of the recent news about Google’s search results, the picture painted becomes even more troubling.

Update, 13/01/17:

Another Google spokesperson got in touch to provide the following statement: “We have an existing set of publisher policies that govern where Google ads may be placed in order to protect users from harmful, misleading or inappropriate content. We enforce these policies vigorously, and taking action may include suspending ads on their site. Publishers can appeal these actions.”

Amelia Tait is a technology and digital culture writer at the New Statesman.