
Amazon Echo: How 2016 tech is bringing us closer to 1984

Amazon has launched a voice-enabled virtual assistant that will be "constantly listening" inside your home. 

It’s not clever or original to compare modern technology to the dystopian surveillance devices imagined in George Orwell’s Nineteen Eighty-Four. Thankfully, the UK launch of Amazon Echo earlier today means comparison is no longer necessary. The “constantly listening” smart speaker is not merely like Orwell’s Party-monitoring telescreen; it is one. Both devices are designed to simultaneously broadcast entertainment and listen in on your conversations. Yet one is the tyrannical tool of an authoritarian dictatorship and the other is available for £149.99 – plus £4.75 postage and packaging – on Amazon today.

Amazon Echo is a masterful piece of technology. The wireless speaker uses voice recognition to obey various commands and answer your questions. Unlike Siri, Echo’s personal assistant Alexa will answer your questions in full, intelligent sentences, instead of giving you a series of links. It can play your music, read your books, tell you the weather forecast, and plan your commute. If you have other smart devices in your home, it can switch on your lights, open your garage door, and adjust your thermostat. But everyone is too busy celebrating what the device can do to spare a thought for whether it should be doing it.

“People don’t seem to understand that smart devices start off stupid and they only become smart by the information we give them,” says Renate Samson, the chief executive of Big Brother Watch, an organisation that protects individual privacy by exposing the surveillance state. Amazon Echo had only 13 skills in November 2014; it now has over 3,000. Amazon readily admit the device has improved by recording, storing, and analysing data on the things its users say to Alexa. Although this data is currently only used to improve the product, David Limp, the senior vice president of devices at Amazon, tells me it may be used for targeted advertising in the future. “Hypothetical questions are hard,” he says, “but we’re not doing that today.” However, if you use the device to connect with another company – for example, Uber – then that company can also store your data and use it however it wishes.

Amazon haven’t, of course, ignored these issues. “You can’t think about privacy as an afterthought of this product,” said Limp in a two-minute-long segment on privacy during the product’s hour-long launch, “it has to be built into the foundation of the product itself.” This built-in privacy is the option to mute the speaker, something Limp describes as akin to cutting the wires on the microphones. When muted, the light ring on the device goes red, and nothing you say will be streamed to the cloud. When the device is listening to a command, the light is blue. And although the device doesn’t record anything said before the wake word “Alexa”, it is still, in Limp’s words, “constantly listening” out for its name.
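
How can a device listen constantly without recording constantly? A minimal sketch of the general wake-word pattern, in Python, might look like the following. To be clear, this is an illustration, not Amazon’s firmware: every name and number here is invented, and real devices detect the wake word with an acoustic model rather than text matching.

```python
from collections import deque

WAKE_WORD = "alexa"
BUFFER_FRAMES = 5  # small rolling window: older audio falls off and is forgotten


def stream_to_cloud(command: str) -> None:
    # Stand-in for the network call: in this toy model, the ONLY point
    # at which anything ever leaves the device.
    print(f"uploading: {command!r}")


def wake_word_loop(frames: list[str], muted: bool = False) -> None:
    """Toy model of a 'constantly listening' speaker.

    Each frame stands in for a snippet of audio. Frames live only in a
    small in-memory ring buffer until the wake word is heard; while
    muted (the red-ring state), input is ignored entirely.
    """
    ring = deque(maxlen=BUFFER_FRAMES)
    for frame in frames:
        if muted:
            continue  # "wires cut": nothing is buffered, nothing is sent
        ring.append(frame)
        if WAKE_WORD in frame.lower():
            # Only what follows the wake word is treated as a command;
            # the pre-wake-word buffer is discarded, not uploaded.
            command = frame.lower().split(WAKE_WORD, 1)[1].strip(" ,")
            stream_to_cloud(command or "<awaiting command>")
            ring.clear()


wake_word_loop(["private dinner chat", "more private chat", "Alexa, what's the weather?"])
```

The privacy argument, of course, is that all of this happens inside a sealed device you cannot audit: the buffer-discarding step is taken entirely on trust.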

“Amazon have realised that people don’t want to mute a device that’s meant to listen in,” says Samson. Although the company have a secondary privacy feature – the ability to delete everything Alexa has uploaded to the cloud via an app – the onus is once again on the individual. And, as Samson notes, “this is a device that people who aren’t fretful about their privacy will find desirable.”

The sophisticated technology also means the device can listen to you in unprecedented ways. “Far Field” voice recognition enables it to hear you from across the room, and “beam forming” means the device singles out which of its seven microphones is pointed towards you, amplifies the sound, and suppresses that of the other mics. As such, Echo can hear you even when loud music is playing. “Echo spatial perception” means that if you have lots of the devices – or the smaller, cheaper Echo Dot, which Amazon sells in six and 12 packs – only the one nearest will respond. In theory, then, the device knows which room you are in and when.
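
As a rough illustration of the “pick the right mic, amplify it, suppress the rest” idea, here is a toy sketch in Python. It is not Amazon’s implementation: real beam forming works on inter-microphone timing differences, whereas this sketch uses the simplest possible proxy (signal energy), and all the numbers are made up.

```python
import numpy as np


def crude_beam_select(mic_signals: np.ndarray, boost: float = 2.0) -> np.ndarray:
    """Mix a multi-microphone recording down to one channel by boosting
    the microphone with the most signal energy (assumed to face the
    speaker) and suppressing the others.

    mic_signals: array of shape (n_mics, n_samples), one row per mic.
    """
    energy = (mic_signals ** 2).sum(axis=1)               # per-mic signal energy
    weights = np.full(mic_signals.shape[0], 1.0 / boost)  # suppress every mic...
    weights[energy.argmax()] = boost                      # ...except the loudest one
    return (weights[:, np.newaxis] * mic_signals).sum(axis=0)


# Seven mics picking up background noise; mic 3 also hears the voice.
rng = np.random.default_rng(0)
signals = rng.normal(0.0, 0.1, size=(7, 1000))
signals[3] += np.sin(np.linspace(0.0, 40.0, 1000))  # the voice, strongest on mic 3
mono = crude_beam_select(signals)
print(mono.shape)  # (1000,): one mixed-down channel dominated by mic 3
```

The point of the exercise is only this: to work at all, the array must constantly compare what every microphone hears in order to decide which part of the room it is listening to.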

But so what if Amazon is listening? After all, you’re not having super-secret criminal meetings, and hey, targeted advertising just means you get better recommendations for egg-slicers, right?

“Lots of people think they have nothing to hide or nothing to fear but when your data is you as a human being, everyone has something to hide,” says Samson. “It’s not because it’s secretive, it’s because taken out of context it could be very misconstrued.” This is already evident in the way Google search histories are used in courts. “Individuals and groups should be able to communicate freely without it being accessed by invisible beings. This technology does exactly that. None of us really understand the broader implications of that.”

But Amazon aren’t alone. Smart devices increasingly come with cameras and microphones that can’t be disabled, and Samsung faced a backlash last year after its Smart TV listened in on conversations and shared the data with third parties. Big Brother Watch are also concerned about Mattel’s Hello Barbie, a doll that records children’s conversations with it for their parents to listen to later. “We felt that that was an absolutely massive attack on a child’s ability to play,” says Samson.

All of this is to say nothing of the concern that such devices could be hacked. “I would never say never,” says Limp, borrowing from Justin Bieber, when asked whether this could happen. Not only could your data potentially be breached, but others may find a way to listen in. “Mark Zuckerberg covers the camera and mic up on his computer, so there are privacy and security concerns even for people who think you should share everything,” says Samson. “The irony of that isn’t lost on any of us.”

“Fast forward to the world where the smart home is controlled by your voice, you will see it as delightful,” said Limp as he concluded his presentation. Maybe, yes. But they’ll have to throw me in Room 101 first.

Amelia Tait is a technology and digital culture writer at the New Statesman.


Should Facebook face the heat for the Cleveland shooting video?

On Easter Sunday, a man now dubbed the “Facebook killer” shot and killed a grandfather before uploading footage of the murder to the social network. 

A murder suspect has committed suicide after he shot dead a grandfather seemingly at random last Sunday. Steve Stephens, 37, was being hunted by police after he was suspected of killing Robert Godwin, 74, in Cleveland, Ohio.

The story has made international headlines not because of the murder in itself – in America, there are 12,000 gun homicides a year – but because a video of the shooting was uploaded to Facebook by the suspected killer, along with, moments later, a live-streamed confession.

After it emerged that Facebook took two hours to remove the footage of the shooting, the social network came under fire and promised to “do better” to make the site a “safe environment”. It has also launched a review of how it deals with violent content.

It’s hard to poke holes in Facebook’s official response – written by Justin Osofsky, its vice president of global operations – which at once acknowledges how difficult it would have been to do more and promises to do more anyway. In a timeline of events, Osofsky notes that the shooting video was not reported to Facebook until one hour and 45 minutes after it had been uploaded. A further 23 minutes after this, the suspect’s profile was disabled and the videos were no longer visible.

Despite this, the site has been condemned by many, with Reuters calling its response “bungled” and the two-hour response time prompting multiple headlines. Yet solutions are not so readily offered. Currently, the social network largely relies on its users to report offensive content, which is then reviewed and removed by a team of humans – at present, artificial intelligence generates only around a third of the reports that reach this team. The network is constantly working on new algorithms and artificially intelligent systems to uphold its community standards, but there is simply no existing AI that can comb through the content of Facebook’s one billion active users to immediately identify and remove a video of a murder.
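
To make the structural constraint concrete, here is a toy sketch in Python of why report-driven moderation can never beat its first report. The names and logic are invented for illustration, not Facebook’s actual system; only the two timings are taken from Osofsky’s published timeline.

```python
from dataclasses import dataclass


@dataclass
class Report:
    post_id: str
    source: str                # "user" or "automated"
    minutes_after_upload: int  # when the report was filed


HUMAN_REVIEW_MINUTES = 23  # the report-to-takedown gap in Osofsky's timeline


def takedown_time(reports: list[Report], post_id: str) -> int | None:
    """Toy model of report-driven moderation: a post is invisible to the
    review team until its first report arrives, so the fastest possible
    takedown is first-report time plus human review time.
    """
    report_times = [r.minutes_after_upload for r in reports if r.post_id == post_id]
    if not report_times:
        return None  # never reported, so never reviewed
    return min(report_times) + HUMAN_REVIEW_MINUTES


# The shooting video's first report came 1 hour 45 minutes after upload.
reports = [Report("shooting-video", source="user", minutes_after_upload=105)]
print(takedown_time(reports, "shooting-video"))  # 128, i.e. just over two hours
```

In this model, the review team’s speed is almost irrelevant: the response time is dominated by how long the world takes to file the first report.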

The only solution, then, would be for Facebook to watch every second of every video – 100 million hours of which are watched every day on the site – before it goes live, a task daunting not only for its team, but for anyone concerned about global censorship. Of course Facebook should act as quickly as possible to remove harmful content (and of course Facebook shouldn’t call murder videos “content” in the first place) but does the site really deserve this much blame for the Cleveland killer?

To remove the blame from Facebook is not to deny that it is incredibly psychologically damaging to watch an auto-playing video of a murder. Nor should we lose sight of the fact that the act, as well as the name “Facebook killer” itself, could arguably inspire copycats. But we have to acknowledge the limits on what technology can do. Even if Facebook could have removed the video in three seconds, it is apparent that for thousands of users the first impulse is to download and re-upload upsetting content rather than report it. This is evident in the fact that the victim’s grandson, Ryan, took to a different social network – Twitter – to ask people to stop sharing the video. It took nearly two hours for anyone to report the video to Facebook; it took seconds for people to download a copy for themselves and share it on.

When we ignore these realities and beg Facebook to act, we embolden the moral crusade of surveillance. The UK government has a pattern of using tragedy to justify invasions into our privacy and security, most recently when home secretary Amber Rudd suggested that WhatsApp should remove its encryption after it emerged the Westminster attacker had used the service. We cannot bemoan Facebook’s power in the world and simultaneously beg it to take total control. When you ask Facebook to review all of the content of all of its billions of users, you are asking for a God.

This is particularly undesirable in light of the good that shocking Facebook videos can do – however gruesome. Invaluable evidence is often provided in these clips, be they filmed by criminals themselves or their victims. When Philando Castile’s girlfriend Facebook live-streamed the aftermath of his shooting by a police officer during a traffic stop, it shed international light on police brutality in America and aided the charging of the officer in question. This clip would never have been seen if Facebook had total control of the videos uploaded to its site.  

We need to stop blaming Facebook for things it can’t yet change, and focus instead on the things it can. In 2016, the site was criticised for: allowing racial discrimination via its targeted advertising; invading privacy with its facial scanning; banning breast cancer-awareness videos; avoiding billions of dollars in tax; and tracking non-users’ activity across the web. Facebook should be under scrutiny for its repeated violations of its users’ privacy, not for hosting violent content – a criticism that will just give the site an excuse to violate people’s privacy even further.

No one blames cars for the recent spate of vehicular terrorist attacks in Europe, and no one should blame Facebook for the Cleveland killer. Ultimately, we should accept that the social network is just a vehicle. The one to blame is the person driving.

If you have accidentally viewed upsetting and/or violent footage on social media that has affected you, call the Samaritans helpline on 116 123 or email jo@samaritans.org

Amelia Tait is a technology and digital culture writer at the New Statesman.
