Samsung's 4K TV sets on show at CES. Photo: Getty Images

Before we give doors and toasters sentience, we should decide what we're comfortable with first

It's becoming more and more common for everyday appliances to have features we don't expect, and the implications for privacy and freedom can be surprisingly profound. We should be sure we know what we're buying into.

Samsung has today had to try to reassure consumers who own some of its newest TV sets that they haven't accidentally bought into some kind of Orwellian dystopia. It hasn't done a particularly good job of it, but it has, at least, tried.

The source of the consternation is that many of its newest Smart TVs have a built-in microphone, so, if you lose the remote, you can call out what channel you'd like the set to change to. But the privacy policy that comes with the TV includes some worrying language: "Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of voice recognition." That's right - Samsung says it's recording everything anyone says in their own home when within earshot of their own TV, and possibly sharing it with other companies.

The statement the company released to try to calm down worried consumers is an interesting piece of text: yes, they admit, we record you, but we destroy the recordings when we're done, and we're very careful with them when we're transmitting them between ourselves and the partners who actually do the processing to understand what it is you've said. It's interesting because what it describes is now so utterly banal that the real worry is that we're still shocked or surprised by it.

For several years now, the phrase "Internet of Things" has been commonplace within the tech industry. It's a piece of business jargon - like "Big Data" - that repackages a converging series of technological trends as something simpler and less intimidating. In truth, those who worry about Big Brother are right to see Samsung's listening TV as frightening. Overused as "Orwellian" is as an adjective, it applies almost precisely to what we're talking about here.

What is the Internet of Things, then? Put most simply, think of it like this: in the 1990s we saw the internet appear on personal computers, and in the 2000s we saw it move onto smartphones. The Internet of Things is the next step, as web connectivity comes to a huge range of everyday objects - and to ourselves - in ways we might recognise most clearly from science fiction. Fast internet connections mean that every slab of silicon has access to a remote cloud computing cluster that can give it functions far superior to any unconnected brick of equivalent size - that's how Siri works on iPhones. As marvellous as that smartphone may look in the hand, it's not powerful enough to run real-time voice recognition, but Apple's huge data centres are available just a few milliseconds away across the Atlantic to do the job just as seamlessly.
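The thin-client-plus-cloud split described above can be sketched in a few lines. Everything here is illustrative - the `cloud_recognise` function and the fake transcript table are stand-ins for a remote speech-to-text service, not Apple's or Samsung's actual API:

```python
# Toy illustration of thin-client voice recognition: the device only
# records audio and ships it off; all heavy processing happens "in
# the cloud" on someone else's servers.

def cloud_recognise(audio_bytes: bytes) -> str:
    # Stand-in for the remote data centre: in reality this would be a
    # large speech-to-text model running on a server cluster.
    fake_transcripts = {b"...channel four...": "switch to Channel 4"}
    return fake_transcripts.get(audio_bytes, "<unrecognised>")

class SmartTV:
    """The device itself does almost nothing locally."""

    def hear(self, audio_bytes: bytes) -> str:
        # The privacy issue in a nutshell: raw audio of whatever was
        # said in the living room leaves the home at this line.
        return cloud_recognise(audio_bytes)

tv = SmartTV()
print(tv.hear(b"...channel four..."))  # switch to Channel 4
```

The point of the sketch is where the boundary sits: the "smart" behaviour lives entirely on the far side of the network call, which is why the audio has to travel at all.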

Another classic example to cite here is Nest's thermostat, which users can buy to replace their normal, boring one. It's clever in that it learns from what you do to it - turn down the temperature at certain times of day, and on certain days of the week, and it'll automatically build up a profile of your heating habits and adjust before you know you want to do it yourself. And it saves energy from learning not to heat empty homes! Put one of these in every home in the country and the environmental savings could be vast. What possible downside could there be?
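A learning thermostat of this kind can be approximated very simply: log each manual adjustment against the day and hour it was made, then predict the setpoint from past behaviour. This is a minimal sketch of the idea, not Nest's actual algorithm:

```python
from collections import defaultdict

class LearningThermostat:
    """Learns a weekly heating profile from manual adjustments."""

    def __init__(self, default_temp: float = 18.0):
        self.default = default_temp
        # (weekday, hour) -> list of temperatures the user chose then
        self.history = defaultdict(list)

    def manual_set(self, weekday: int, hour: int, temp: float) -> None:
        # Every time the user touches the dial, remember the context.
        self.history[(weekday, hour)].append(temp)

    def predict(self, weekday: int, hour: int) -> float:
        # Once a habit is established, apply it automatically;
        # with no data, fall back to the default setpoint.
        past = self.history[(weekday, hour)]
        return sum(past) / len(past) if past else self.default

stat = LearningThermostat()
for _ in range(3):                # three Mondays running...
    stat.manual_set(0, 7, 21.0)   # ...the user asks for 21°C at 7am
print(stat.predict(0, 7))   # 21.0 - learned habit
print(stat.predict(0, 13))  # 18.0 - no data, use default
```

Notice that the profile is just a small table of when you are home and what you do there - which is exactly why, as the next example shows, it's valuable to more than just your boiler.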

Well, as author and digital rights activist Cory Doctorow explained to me when I interviewed him last year, imagine an Arab Spring-type situation in a country with very cold winters, universal Nest thermostat adoption and a dictator with no qualms about mass surveillance of web and mobile data communications. On the first day of a mass uprising the security services can stick fake mobile signal towers up around the public square of the capital and hoover up the unique identification addresses from the smartphones of every single protester there. (These towers exist, even here.) That night, as the temperature drops to its bitter coldest, every single protester finds their heating system remotely disabled. Hypothermia takes care of the dictator's problem.

This is not science fiction. It's entirely possible with existing technology, and only made unrealistic because that technology hasn't reached universal rates of adoption. This is the upcoming Internet of Things, if we're not careful.

Here are another couple of examples of what the Internet of Things means, if we extrapolate from what the internet is at the moment. This is a real graphic from a Sony patent application for a system designed to make advertising interactive. Ads are a pain in the ass, and ad companies know this, so there's huge money to be made if someone can create a way to make sure that every single eyeball that watches an ad is known to also be paying attention to it. And this is one of Sony's ideas. It is evil:

Or try this - a half-page aside in one of Philip K. Dick's best novels, Ubik, from 1969, in which Joe Chip's apartment door demands a five-cent toll before it will open, and threatens to sue him when he tries to unscrew its bolt assembly instead. I happened to be reading it yesterday as the Samsung story broke, in what counts as a pretty wild coincidence.

Funny, right?

Except last year the New York Times reported that dozens of people across the United States are now waking up each day to find that their cars won't start. They've fallen behind on their monthly payments, and so dealers are able to remotely disable their vehicles as an "incentive" to fix their debts.

And these are just the first few examples, remember. It's not just about the upcoming smart fridges and smart lamps and smart clothing - it's about the fact that suddenly, for the first time ever, most of the things we interact with every day will know about all of the other things we've interacted with that day. And the companies that make the things that track us know there's an absolute tonne of money to be made if they can build huge databases connecting the dots between those devices, all in the course of providing an otherwise innocent-sounding service.

Nest, after all, was bought by Google - a company whose smartphone operating system, Android, is the most popular in the world because it's free, and it's free because they know the real money is in collecting user data and selling it to advertisers. The Google ecosystem encourages people to stick within the shopping mall, just as Apple's does. Amazon and Facebook want to track you as you browse from website to website because there's money in being able to build an accurate profile of what you like and love, and sell those things to you. Imagine the possibilities for #meaningfulbrandengagement in a world where every physical activity is as trackable as every digital one is now.

This is also possible because of the way the digital world has grown over the last 15 or so years. We've seen it evolve to the point where the dominant economic model is that of a service provided for free, in exchange for letting the service provider suck value out of our personal data. Apply that model to physical objects and you get a new kind of rentier capitalism, where we get the luxury of not having to get up for the TV remote in exchange for being sold stuff even more relentlessly than before; where we get a house that can text us when it's on fire, but at the expense of having to pay a rental fee for the smart lock so we can get inside to save the cat.

A number of different issues come together quite neatly in the Internet of Things to make our boring stuff, in theory, more useful - but, as the examples above show, they also make it intrinsically less trustworthy, more dystopian. Here's where we are with the world and the web today:

  • Users rarely have the ability to see what an app does with their data, and who else has access to it. The open source and free software movements have been making this point for years - if you can't open up a program and see what it does, then you can't really say you own it, or that you can really trust it not to betray you. (Similarly, those in the maker movement make the point that you can't really say you own a physical product if it actively tries to stop you modding or fixing it yourself.)
  • Companies have to build secure databases to store ever-increasing quantities of data. The more data in one place, the more tempting a target it is for those engaged in corporate espionage, identity theft, or simple vandalism. And the cost to business of compensating customers hurt by a data leak can be vast.
  • Digital services are moving from being optional to being necessary when it comes to functioning in modern society, and those services are often monopoly-like, with business models based on extracting wealth from personal data. Some services assume copyright over our personal data - from photos to blog posts - when uploaded; some services can disappear one day without warning, meaning the loss of what can be years of work and memories.
  • Mass surveillance of civilian populations is possible because of the ubiquity of smartphones and computers. This applies to your own government, and to others, and to non-governmental organisations as well.
  • "Smart" tech often isn't. The reality of Smart TVs is evident to anyone who's actually tried to use one - they're clunky, they're slow, and they barely do half of what the box claims. (Want a TV which can load catch-up apps from all of the BBC, ITV, Channel 4 and Channel 5? Be prepared for a long search.) These items go out into the world with web connectivity, but also with poor security that's rarely updated as often as it should be. Samsung's voice recognition software isn't even that good, relative to the privacy we give up to use it.
  • There's a movement towards what's called the "sharing economy" - instead of owning a car, for example, you rent one only on the days you need it, summoned with an Uber-like app perhaps. Despite the benefits this shift may have for city congestion and air pollution (we'll only need a fraction of the cars the world has now), a change from an ownership economy to a rental economy - where, crucially, the companies that create and sell products retain ownership - means a world where individual control over consumer products is reduced even further.

Combine these trends to understand why the Internet of Things should worry all of us: it means a world where everything we do is tracked by everything we touch; where opting out is near-impossible; where the databases holding that tracking data are often vulnerable to hackers, thieves and governments; and where mundane objects like doors and cars can rebel against us if we break the terms and conditions laid down in a contract we will have had no choice but to accept. 

It's about more than just the worry that your insurance company might learn about your struggles to run 5k from data shared by an over-eager wristband (though that is a plausible worry). Think of what happens if we let companies make pacemakers with Wi-Fi and then they go out of business. Who's responsible for the firmware updates that would otherwise block remote hacks? What choice does someone have if the pacemaker that saves their life turns out to be a pretty effective tracking device for an authoritarian regime? And what the hell happens to that person's data when the company legally responsible for protecting it doesn't exist any more?

Over the last year, the realisation that sticking the internet into fridges might lead to a world like this has caused some to ask whether the solution is for national governments to introduce regulation now, before the infrastructure is in place. January 2015 saw the Information Commissioner's Office and Ofcom team up to announce the first steps towards IoT regulation within the UK, with user privacy among the key priorities, as the Data Protection Act 1998 is increasingly seen as inadequate. The EU, too, is revising its data protection directive.

Last month also saw the publication of the US Federal Trade Commission's study into the IoT, and its conclusions were relatively moderate. There was no call for legislation specifically addressing IoT issues, but the study did suggest that existing rules on data protection, and on notifying users of data breaches, could use some tightening. It came after a 2013 case in which the FTC settled a complaint with a company whose home-monitoring video camera systems had leaked live footage onto the web, where others could see it.

The approach in both of these cases, however, is one of compromise: governments don't want the cost of complying with new rules to be so high that it strangles new tech businesses, or scares away existing ones. That makes citizen engagement on this issue vital - the balance that's struck will reflect the many vested interests in the tech industry taking part in the regulatory process. Resisting our listening TVs at the point of use is just one way to express discomfort with this change.

Ian Steadman is a staff science and technology writer at the New Statesman. He is on Twitter as @iansteadman.


“Stinking Googles should be killed”: why 4chan is using a search engine as a racist slur

Users of the anonymous forum are targeting Google after the company introduced a programme for censoring abusive language.

Contains examples of racist language and memes.

“You were born a Google, and you are going to die a Google.”

Despite the lack of obscenity and profanity in this sentence, you have probably realised it was intended to be offensive. It is just one of hundreds of similar messages posted by the users of 4chan’s /pol/ board – an anonymous forum where people go to be politically incorrect. But they haven’t suddenly seen the error of their ways about using the n-word to demean their fellow human beings – instead they are trying to turn the word “Google” itself into a racist slur.

In an undertaking known as “Operation Google”, some 4chan users are resisting Google’s latest artificial intelligence program, Conversation AI, by swapping smears for the names of Google products. Conversation AI aims to spot and flag offensive language online, with the eventual possibility that it could automatically delete abusive comments. The famously outspoken forum 4chan, and the similar website 8chan, didn’t like this, and began their campaign which sees them refer to “Jews” as “Skypes”, Muslims as “Skittles”, and black people as “Googles”.

If it weren’t for the utterly abhorrent racism – which includes users conflating Google’s chat tool “Hangouts” with pictures of lynched African-Americans – it would be a genius idea. The group aims to force Google to censor its own name, making its AI redundant. Yet some have acknowledged this might not ultimately work – as the AI will be able to use contextual clues to filter out when “Google” is used positively or pejoratively – and their ultimate aim is now simply to make “Google” a racist slur as revenge.


Posters from 4chan

“If you're posting anything on social media, just casually replace n****rs/blacks with googles. Act as if it's already a thing,” wrote one anonymous user. “Ignore the company, just focus on the word. Casually is the important word here – don't force it. In a month or two, Google will find themselves running a company which is effectively called ‘n****r’. And their entire brand is built on that name, so they can't just change it.”

There is no doubt that Conversation AI is questionable to anyone who values free speech. Although most people desire a nicer internet, it is hard to agree that this should be achieved by blocking out large swathes of people, and putting the power to do so in the hands of one company. Additionally, algorithms can’t yet accurately detect sarcasm and humour, so false-positives are highly likely when a bot tries to identify whether something is offensive. Indeed, Wired journalist Andy Greenberg tested Conversation AI out and discovered it gave “I shit you not” 98 out of 100 on its personal attack scale.
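The false-positive problem is easy to demonstrate with even the crudest possible scorer. The toy below is my own illustration - nothing like Google's actual model, which uses machine learning rather than a word list - but it fails on Greenberg's harmless idiom in exactly the way the article describes:

```python
# A deliberately naive "offensiveness" scorer: count flagged words,
# ignore context entirely. Real systems are ML-based, but idioms and
# sarcasm trip them up through the same basic failure mode.
FLAGGED = {"shit", "moron", "idiot"}

def attack_score(sentence: str) -> int:
    # Score on a crude 0-100 scale based purely on word matches.
    words = sentence.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in FLAGGED)
    return min(100, hits * 98)

print(attack_score("I shit you not"))        # 98 - false positive
print(attack_score("Thanks, very helpful"))  # 0
```

A context-blind scorer cannot tell an idiom from an insult - which is precisely why automatically deleting comments on the strength of such a score worries free-speech advocates.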

Yet these 4chan users have made it impossible to agree with their fight against Google by combining it with their racism. Google scores the word “moron” 99 out of 100 on its offensiveness scale. Had protestors decided to replace this – or possibly even more offensive words like “bitch” or “motherfucker” – with “Google”, pretty much everyone would be on board.

Some 4chan users are aware of this – and indeed it is important not to treat the site as a unanimous entity. “You're just making yourselves look like idiots and ruining any legitimate effort to actually do this properly,” wrote one user, while others discussed their concerns that “normies” – i.e. normal people – would never join in. Other 4chan users are against Operation Google because they see it as self-censorship, or simply as stupid.


Memes from 4chan

But anyone who disregards these efforts as the work of morons (or should that be Bings?) clearly does not understand the power of 4chan. The site brought down Microsoft’s AI Tay in a single day, brought the Unicode swastika (卐) to the top of Google’s trends list in 2008, hacked Sarah Palin’s email account, and leaked a large number of celebrity nudes in 2014. If the Ten Commandments were rewritten for the modern age and Moses took to Mount Sinai to wave two 16GB Tablets in the air, then the number one rule would be short and sweet: Thou shalt not mess with 4chan.

It is unclear yet how Google will respond to the attack, and whether this will ultimately affect the AI. Yet despite what ten years of Disney conditioning taught us as children, the world isn’t split into goodies and baddies. While 4chan’s methods are deplorable, their aim of questioning whether one company should have the power to censor the internet is not.

Google also hit headlines this week for its new “YouTube Heroes” program, a system that sees YouTube users rewarded with points when they flag offensive videos. It’s not hard to see how this kind of crowdsourced censorship is undesirable, particularly again as the chance for things to be incorrectly flagged is huge. A few weeks ago, popular YouTubers also hit back at censorship that saw them lose their advertising money from the site, leading #YouTubeIsOverParty to trend on Twitter. Perhaps ultimately, 4chan didn't need to go on a campaign to damage Google's name. It might already have been doing a good enough job of that itself.

Google has been contacted for comment.

Amelia Tait is a technology and digital culture writer at the New Statesman.