
Science & Tech
13 July 2015

Twitter’s new porn-spotting robot moderators

The social networking site has introduced new artificial intelligence systems that can spot and delete sexual and violent images – and spare human moderators in the process. 

By Barbara Speed

Under the compelling headline “The labourers who keep dick pics and beheadings out of your Facebook feed”, journalist Adrian Chen delved last year into the little-known world of social media’s content moderators. These thousands of workers, most based in Asia, trawl through social networking sites in order to delete or flag offensive content. In the process, they are exposed to the very worst the internet has to offer – beheadings, violent pornography, images of abuse – all for wages as low as $300 a month.

But this month, Twitter has taken a first step towards automating this process, and thus sparing a huge unseen workforce from their daily bombardment of horrors. Almost exactly a year ago, Twitter bought start-up Madbits, which offers, in the words of its co-founders, a “visual intelligence technology that automatically understands, organises and extracts relevant information from raw media”. 

At the time, tech websites speculated that Madbits’ technology would be used to develop facial recognition or tagging on Twitter photos. But in fact, the start-up’s first task was very different: it was instructed by Alex Roetter, Twitter’s head of engineering, to build a system which could find and filter out offensive images, defined by the company as “not safe for work”.

This month, Wired reported that these artificial intelligence (AI) moderators are now up and running. Roetter claims the new moderator-bots can filter out 99 per cent of offensive imagery. They also incorrectly flag about 7 per cent of acceptable images as offensive – but, the company reasons, better safe than sorry.

Like other machine learning systems, the moderator “learns” how to spot offensive imagery by analysing reams of pornography and gore, and then applies the patterns it has found to new material. Over time, the system continues to learn, getting even better at spotting NSFW images. Soon, systems like these could replace content moderation farms altogether.
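To give a flavour of the “learn from labelled examples, then judge new ones” approach described above, here is a deliberately simplified sketch. The real system is a deep neural network trained on raw images; this toy version instead reduces each image to a made-up two-number feature vector and uses a nearest-centroid rule, with a margin that leans towards flagging – the “better safe than sorry” trade-off Twitter describes. All feature values and the `flag_as_nsfw` helper are hypothetical, purely for illustration.

```python
def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# "Training": labelled examples (hypothetical feature vectors standing
# in for real images) are summarised by one centre point per class.
nsfw_examples = [[0.9, 0.8], [0.8, 0.9], [0.95, 0.7]]
safe_examples = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.3]]
nsfw_centre = centroid(nsfw_examples)
safe_centre = centroid(safe_examples)

def flag_as_nsfw(image_features, margin=0.1):
    """Flag an image whose features sit closer to the NSFW centre.

    The margin biases borderline cases towards being flagged, which
    trades some false positives for a higher catch rate.
    """
    return (distance(image_features, nsfw_centre)
            < distance(image_features, safe_centre) + margin)

print(flag_as_nsfw([0.85, 0.75]))  # near the NSFW examples -> True
print(flag_as_nsfw([0.10, 0.15]))  # near the safe examples -> False
```

A larger training set, richer features, and a tuned margin are what separate this sketch from a production moderator; the shape of the decision – distance to learned examples plus a safety bias – is the same.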


As with most stories of automation, it’s worth remembering those whose jobs might be lost as the robots advance, especially in developing countries. But considering the psychological damage brought on by endless exposure to violent images, we can only hope that Twitter and sites like it will offer these workers less distressing moderation jobs (and higher salaries) instead.