Twitter HQ. Photo: Kevin Krejci via Flickr.

Twitter's new porn-spotting robot moderators

The social networking site has introduced new artificial intelligence systems that can spot and delete sexual and violent images – and spare human moderators in the process. 

Under the compelling headline “The labourers who keep dick pics and beheadings out of your Facebook feed”, journalist Adrian Chen delved last year into the little-known world of social media’s content moderators. These thousands of workers, most based in Asia, trawl through social networking sites to delete or flag offensive content. In the process, they are exposed to the very worst the internet has to offer – beheadings, violent pornography, images of abuse – all for wages as low as $300 a month.

But this month, Twitter has taken a first step towards automating this process, and thus sparing a huge unseen workforce from their daily bombardment of horrors. Almost exactly a year ago, Twitter bought start-up Madbits, which offers, in the words of its co-founders, a “visual intelligence technology that automatically understands, organises and extracts relevant information from raw media”. 

At the time, tech websites speculated that Madbits’ technology would be used to develop facial recognition or photo tagging on Twitter. But in fact, the start-up’s first task was very different: it was instructed by Alex Roetter, Twitter’s head of engineering, to build a system that could find and filter out offensive images, defined by the company as “not safe for work”.

This month, Wired reported that these artificial intelligence (AI) moderators are now up and running. Roetter claims the new moderator-bots can filter out 99 per cent of offensive imagery. They also tend to incorrectly identify about 7 per cent of acceptable images as offensive – but, the company reasons, better safe than sorry. 
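To see what those two figures mean in practice, it helps to read them as a detection rate and a false-positive rate measured against images that humans have already labelled. The short Python sketch below is purely illustrative – the function, labels and data are hypothetical, not Twitter's code – and simply shows how such rates are typically computed.

```python
# Illustrative only: computing a detection rate and a false-positive rate
# from hypothetical human labels and model predictions (True = flagged NSFW).

def moderation_rates(labels, predictions):
    true_pos = sum(1 for l, p in zip(labels, predictions) if l and p)
    false_pos = sum(1 for l, p in zip(labels, predictions) if not l and p)
    nsfw_total = sum(labels)
    safe_total = len(labels) - nsfw_total
    detection_rate = true_pos / nsfw_total        # Roetter's "99 per cent"
    false_positive_rate = false_pos / safe_total  # the "7 per cent"
    return detection_rate, false_positive_rate

# Example with made-up data: four NSFW images and four acceptable ones.
labels      = [True, True, True, True, False, False, False, False]
predictions = [True, True, True, True, True,  False, False, False]
print(moderation_rates(labels, predictions))  # (1.0, 0.25)
```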

Like other artificial intelligence systems, the moderator “learns” how to spot offensive imagery by analysing reams of pornography and gore, and then applies what it has learned about their content and patterns to new material. Over time, the system continues to learn, getting even better at spotting NSFW images. Soon, systems like these could replace content moderation farms altogether.
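To make that learning process a little more concrete, here is a minimal sketch of the general technique – supervised image classification – in Python with PyTorch. It is an assumption-laden illustration rather than Twitter's actual system: the folder layout, model choice and training settings are all hypothetical.

```python
# A minimal, hypothetical sketch: fine-tuning an image classifier on labelled
# "safe" vs "NSFW" examples, then applying it to new images. Not Twitter's code.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder layout: training_images/safe/*.jpg, training_images/nsfw/*.jpg
train_data = datasets.ImageFolder("training_images", transform=transform)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Start from a pretrained network and replace the final layer with a
# two-class (safe vs NSFW) output.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# At serving time, images whose predicted NSFW probability exceeds a chosen
# threshold would be filtered out or flagged for human review.
```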

In cases like these, it's worth remembering those whose jobs might be lost as the robots advance, especially in developing countries – but considering the psychological damage brought on by endless exposure to violent images, we can only hope Twitter and sites like it can offer less distressing moderation jobs (and higher salaries) to these workers instead.

Barbara Speed is comment editor at the i, and was previously technology and digital culture writer at the New Statesman and a staff writer at CityMetric.

Flickr: B.S.Wise/YouTube

Extremist ads and LGBT videos: do we want YouTube to be a censor, or not?

Is the video-sharing platform a morally irresponsible slacker for putting ads next to extremist content – or an evil, tyrannical censor for restricting access to LGBT videos?

YouTube is having a bad week. The Google-owned video-sharing platform has hit the headlines twice over complaints that it 1) is not censoring things enough, and 2) is censoring things too much.

On the one hand, big brands including Marks & Spencer, HSBC, and RBS have suspended their advertisements from the site after a Times investigation found ads from leading companies – and even the UK government – were shown alongside extremist videos. On the other, YouTubers are tweeting #YouTubeIsOverParty after it emerged that YouTube’s “restricted mode” (an opt-in setting that filters out “potentially objectionable content”) removes content with LGBT themes.

This isn’t the first time we’ve seen a social media giant criticised for being a lax, morally irresponsible slacker and an evil, tyrannical censor in the same week. Last month, Facebook was criticised both for failing to remove a group called “hot xxxx schoolgirls” and for removing a nude oil painting by an acclaimed artist.

That is not to say these things are equivalent. Quite obviously, child abuse imagery is more troubling than a nude oil painting, and videos entitled “Jewish People Admit Organising White Genocide” are infinitely more problematic than those called “GAY flag and me petting my cat” (a highly important piece of content). I am not trying to claim that ~everything is relative~ and ~everyone deserves a voice~. Content that breaks the law must be removed and LGBT content must not. Yet these conflicting stories highlight the same underlying problem: it is a very bad idea to trust a large multibillion-pound company to be the arbiter of what is or isn’t acceptable.

This isn’t because YouTube has some strange agenda whereby it can’t get enough of extremists and hates the LGBT community. In reality, the company’s “restricted mode” also affects Paul Joseph Watson, a controversial YouTuber whose pro-Trump conspiracy theory content includes videos titled “Islam is NOT a Religion of Peace” and “A Vote For Hillary is a Vote For World War 3”, as well as an interview entitled “Chuck Johnson: Muslim Migrants Will Cause Collapse of Europe”. The issue is that if YouTube did have this agenda, it would have complete control over what it wanted the world to see – and not only are we willingly handing it this power, we are begging it to use it.

Moral panics are the most common justification for extreme censorship and surveillance methods. “Catching terrorists” and “stopping child abusers” are two of the greatest arguments for the dystopian surveillance measures in Theresa May’s Investigatory Powers Act and Digital Economy Bill. Yet in reality, last month the FBI let a child pornographer go free because it didn’t want to reveal to a court the surveillance methods used to catch him. This raises the question: what is the surveillance really for? The same is true of censorship. When we insist that YouTube stop this and that, we are asking it to take complete control – why do we trust that this will reflect our own moral sensibilities? Why do we think it won't use this for its own benefit?

Obviously extremist content needs to be removed from YouTube, but why should YouTube be the one to do it? If a book publisher released A Very Racist Book For Racists, we wouldn’t trust them to pull it off the shelves themselves. We have laws (such as the Racial and Religious Hatred Act) that ban hate speech, and we have law enforcement bodies to enforce them. On the whole, we don’t trust giant commercial companies to rule over what is and isn’t acceptable to say, because oh, hello, yes, dystopia.

In the past, public speech was made up of hundreds of book publishers, TV stations, film-makers, and pamphleteers, and no one person or company had the power to censor everything. A book that didn’t fly at one publisher could go to another, and a documentary that the BBC didn’t like could find a home on Channel 4. Why are we happy for essentially two companies – Facebook and Google – to take this power? Why are we demanding that they use it? Why are we giving them justification to use it more, and more, and more?

In response to last week’s criticism about extremist videos on YouTube, Google UK managing director Ronan Harris said that in 2016 Google removed nearly 2 billion ads, banned over 100,000 publishers, and prevented ads from showing on over 300 million YouTube videos. We are supposed to consider this a good thing. Why? We don't know what these adverts were for. We don't know if they were actually offensive. We don't know why they were banned.

As it happens, YouTube has responded well to the criticism. In a statement yesterday, Google's EMEA president, Matt Brittin, apologised to advertisers and promised improvements, and in a blog post this morning, Google said it is already "ramping up changes". A YouTube spokesperson also tweeted that the platform is "looking into" concerns about LGBT content being restricted. But people want more. The Guardian reported that Brittin declined three times to answer whether Google would go beyond allowing users to flag offensive material. Setting aside Brexit, wouldn't you rather it was up to us as a collective to flag offensive content and come together to make these decisions? Why is it preferable that one company takes on a job that was previously trusted to the government?

Editor’s Note, 22 March: This article has been updated to clarify Paul Joseph Watson’s YouTube content.

Amelia Tait is a technology and digital culture writer at the New Statesman.