If everything's being automated, let's hope we'll like our robots

The robots may be taking our jobs - even making our coffee - but that doesn't mean we'll be fond of them.

How do you make the inevitable robot uprising easier to stomach? Those thinking we were guaranteed a future of flipping burgers and making coffee for each other will be disheartened to hear that coffee company Briggo has managed to automate the latter of those tasks with an autonomous kiosk. Christopher Mims at Quartz explains:

Inside, protected by stainless steel walls and a thicket of patents, there is a secret, proprietary viscera of pipes, storage vessels, heating instruments, robot arms and 250 or so sensors that together do everything a human barista would do if only she had something like perfect self-knowledge. “How is my milk steamer performing? Am I a half-degree off in my brewing temperature? Is my water pressure consistent? Is there any residue buildup on my brewing chamber that might require me to switch to a backup system?”

The Briggo coffee kiosk knows how to make a perfect coffee because it was “trained” by an award-winning barista, Patrick Pierce. He's since left the company, but no matter: as in the techno-utopian Singularity, whose adherents believe that some day we will all upload our brains to computers, once a barista's essence has been captured by Briggo, his human form is just a legacy system.

That last bit will sound familiar to Star Wars fans - Patrick Pierce is coffee's Jango Fett, and the wood-panelled, Yves Behar-designed kiosks are the clone troopers of the high street. The machine isn't just able to match us; it's able to match the absolute best of us.

It's worth reading Mims' piece in full, as he goes on to explain that Nespresso - that little coffee capsule system - has replaced the coffee machines in many of Europe's Michelin-starred restaurants. Anyone, with minimal training, can make a consistently top-class coffee using those capsules. Why bother training a barista? And, as the Briggo kiosk shows, why even bother hiring a human to put the capsule into the machine?

For those who actually enjoy human interaction at places like coffee shops, this is a sad thing. Robots aren't friends. A designer's basic job is to make things that humans can and want to use, and that's going to start meaning “making robots that we want to interact with”.

To wit, here's a video some researchers at MIT have made demonstrating their idea for a helpful flying drone that people can call with their smartphones. It's a bit like a tour guide:

Drones, of course, have a terrible reputation, because for every one that is put to good use delivering burritos, there are others being used to bomb people without warning in places like Pakistan and Yemen. As Dezeen tells it:

Yaniv Jacob Turgeman, research and development lead at Senseable City Lab, said SkyCall was designed to counter the sinister reputation of drones, and show they can be useful. "Our imaginations of flying sentient vehicles are filled with dystopian notions of surveillance and control, but this technology should be tasked with optimism," he told Dezeen.

That optimism comes in the form of a friendly, female - but still distinctly robotic - voice. It's like something from a computer game. Is it particularly reassuring? Not massively. It doesn't give off that trustworthy vibe you'd get from another human, or even a paper map.

Trustworthiness is a theme that science fiction has explored for years, of course, from Fritz Lang's Metropolis to Will Smith's I, Robot, so it's not surprising to see designers begin to tackle it. You also get the idea of the "uncanny valley" thrown around - if you plot "human likeness" on the x-axis of a graph and "how comfortable it makes people feel" on the y-axis, you get a steady upward correlation that collapses (into a "valley" shape) just before it reaches actual human likeness. That is, the objects that creep us out the most are the ones that look as close to human as possible while just falling short. It's all a way of saying that creating things that look like humans, for situations where we expect humans, is tricky.
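For the curious, here's a minimal Python sketch of that curve. It's purely illustrative: the shape, the position of the dip and every number in it are invented to match the verbal description above, not taken from any measured data (Masahiro Mori's original 1970 essay offered a conjecture, not a dataset).

```python
# Illustrative sketch of the "uncanny valley" curve described above.
# All numbers are invented for illustration - this is a conjectured
# shape, not measured data.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0, 1, 500)   # 0 = industrial arm, 1 = real human
rising = likeness                   # the "steady correlation" at first
# A sharp dip just short of full human likeness: the valley itself.
valley = -1.5 * np.exp(-((likeness - 0.85) ** 2) / 0.003)
affinity = rising + valley

plt.plot(likeness, affinity)
plt.axvline(0.85, linestyle="--", linewidth=0.8)  # where things get creepy
plt.xlabel("human likeness")
plt.ylabel("comfort / affinity")
plt.title("Notional uncanny valley curve (illustrative)")
plt.show()
```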

Studies that have looked at what kind of human-likeness we want in our robots have given rise to some surprising results. Akanksha Prakash from Georgia Tech carried out one such study, and its results (published earlier this month) show that, often, participants don't actually want to be helped by human-like robots. The more delicate the task - like having help in the bath - the more divisive the opinions on whether something human-like is better.

There's also a generational divide: younger people don't mind human-robot hybrids around the house, whereas older people prefer the straightforwardly human. There are clearly a lot of psychological factors at work that are going to prove a challenge to designers hoping that their product - whatever it is - becomes a hit.

Perhaps when the robots arrive they'll still have some human-like features, in the same way that some smartphones still use yellow, lined paper to give people a clue that the app they've opened is for making notes - or like wood-panelling on the side of an autonomous coffee kiosk.


Ian Steadman is a staff science and technology writer at the New Statesman. He is on Twitter as @iansteadman.


Forget fake news on Facebook – the real filter bubble is you

If people want to receive all their news from a single feed that reinforces their beliefs, there is little that can be done.

It’s Google that vaunts the absurdly optimistic motto “Don’t be evil”, but there are others of Silicon Valley’s techno-nabobs who have equally high-flown moral agendas. Step forward, Mark Zuckerberg of Facebook, who responded this week to the brouhaha surrounding his social media platform’s influence on the US presidential election thus: “We are all blessed to have the ability to make the world better, and we have the responsibility to do it. Let’s go work even harder.”

To which the only possible response – if you’re me – is: “No we aren’t, no we don’t, and I’m going back to my flowery bed to cultivate my garden of inanition.” I mean, where does this guy get off? It’s estimated that a single message from Facebook caused about 340,000 extra voters to pitch up at the polls for the 2010 US congressional elections – while the tech giant actually performed an “experiment”: showing either positive or negative news stories to hundreds of thousands of their members, and so rendering them happier or sadder.

In the past, Facebook employees curating the site’s “trending news” section were apparently told to squash stories that right-wingers might “like”, but in the run-up to the US election the brakes came off and all sorts of fraudulent clickbait was fed to the denizens of the virtual underworld, much – but not all of it – generated by spurious alt-right “news sites”.

Why? Because Facebook doesn’t view itself as a conventional news provider and has no rubric for fact-checking its news content: it can take up to 13 hours for stories about Hillary Clinton eating babies barbecued for her by Barack Obama to be taken down – and in that time Christ knows how many people will have not only given them credence, but also liked or shared them, so passing on the contagion. The result has been something digital analysts describe as a “filter bubble”, a sort of virtual helmet that drops down over your head and ensures that you receive only the sort of news you’re already fit to be imprinted with. Back in the days when everyone read the print edition of the New York Times, this sort of manipulation was, it is argued, quite impossible; after all, the US media historically made a fetish of fact-checking, an editorial process that is pretty much unknown in our own press. Why, I’ve published short stories in American magazines and newspapers and had fact-checkers call me up to confirm the veracity of my flights of fancy. No, really.

In psychology, the process by which any given individual colludes in the creation of a personalised “filter bubble” is known as confirmation bias: we’re more inclined to believe the sort of things that validate what we want to believe – and by extension, surely, these are likely to be the sorts of beliefs we want to share with others. It seems to me that the big social media sites, while perhaps blowing up more and bigger filter bubbles, can scarcely be blamed for the confirmation bias. Nor – as yet – have they wreaked the sort of destruction on the world that has burst from the filter bubble known as “Western civilisation” – one that was blown into being by the New York Times, the BBC and all sorts of highly respected media outlets over many decades.

Societies that are both dominant and in the ascendant always imagine their belief systems and the values they enshrine are the best ones. You have only to switch on the radio and hear our politicians blithering on about how they’re going to get both bloodthirsty sides in the Syrian Civil War to behave like pacifist vegetarians in order to see the confirmation bias hard at work.

The Western belief – which has its roots in imperialism, but has bodied forth in the form of liberal humanism – that all is for the best in the world best described by the New York Times’s fact-checkers, is also a sort of filter bubble, haloing almost all of us in its shiny and translucent truth.

Religion? Obviously a good-news feed that many billions of the credulous rely on entirely. Science? Possibly the biggest filter bubble there is in the universe, and one that – if you believe Stephen Hawking – has been inflating since shortly after the Big Bang. After all, any scientific theory is just that: a series of observable (and potentially repeatable) regularities, a bubble of consistency we wander around in, perfectly at ease despite its obvious vulnerability to those little pricks, the unforeseen and the contingent. Let’s face it, what lies behind most people’s beliefs is not facts, but prejudices, and all this carping about algorithms is really the howling of a liberal elite whose own filter bubble has indeed been popped.

A television producer I know once joked that she was considering pitching a reality show to the networks to be called Daily Mail Hate Island. The conceit was that a group of ordinary Britons would be marooned on a desert island where the only news they’d have of the outside world would come in the form of the Daily Mail; viewers would find themselves riveted by watching these benighted folk descend into the barbarism of bigotry as they absorbed ever more factitious twaddle. But as I pointed out to this media innovator, we’re already marooned on Daily Mail Hate Island: it’s called Britain.

If people want to receive all their news from a single feed that constantly and consistently reinforces their beliefs, what are you going to do about it? The current argument is that Facebook’s algorithms reinforce political polarisation, but does anyone really believe better editing on the site will return our troubled present to some prelapsarian past, let alone carry us forward into a brave new factual future? No, we’re all condemned to collude in the inflation of our own filter bubbles unless we actively seek to challenge every piece of received information, theory, or opinion. And what an exhausting business that would be . . . without the internet.

Will Self is an author and journalist. His books include Umbrella, Shark, The Book of Dave and The Butt. He writes the Madness of Crowds and Real Meals columns for the New Statesman.

This article first appeared in the 24 November 2016 issue of the New Statesman, Blair: out of exile