If Facebook is serious about gender-based hate, why is it still hosting revenge porn?

Want to get back at your girlfriend for leaving you? Upload a photo she gave you in private and let strangers help you abuse her. Facebook won't do anything about it.

Facebook has a problem with women. That was clear around the time it started to take down photos that showed women’s mastectomy scars whilst leaving up images that apparently showed women being beaten and raped.

As problems go, it’s been a longstanding one (I wrote back in October 2011 about their hosting of rape-promoting groups – groups like “Riding your Girlfriend softly, Cause you don’t want to wake her up” – and refusal to do anything about it). It’s also been getting worse. As last month’s outcry over misogynistic pages showed, over the past two years horrific (warning: not hyperbole) words have been joined by horrific pictures.

After a targeted campaign by feminist groups, Facebook finally listened. They made a public commitment to improve their handling of gender-based hate. 

I wonder, then, why "revenge porn" pages are sitting on the site.

By unhappy accident, I stumbled upon one last week. After less than five minutes of investigation via the Facebook search tool, I’d found 22 more. (Having continued to search over the past few days, it was creepily easy to keep finding new pages.)

Pages with the declared intention to (quote) "Expose all the slags and sluts" and "Inbox pictures of your nude ex and get them back for the bad things!" Want to get back at your girlfriend for leaving you? Upload a photo she gave you in private and let strangers help you abuse her. 

It’s been known for a while that there are websites dedicated to "revenge porn". They’re about humiliation and shaming women for being sexual. And now Facebook is part of it.

On the site’s pages, there’s photo after photo of women in their underwear or holding their breasts. Some are masturbating. One I saw was a woman giving oral sex – a picture that showed her face.

Facebook’s "comment" and "like" functions allow an added layer of sleazy misogyny. With a click, users can rate what they see or write what they’d like to do to the victim. (Examples: "i would smash you in" and "there a boss pear [sic] of tits to sponk all over lool.")

Under one photo of a woman holding her breasts that showed her bedroom, users proceeded to have a conversation about how she needed to “spend less time in front of that mirror and start cleaning up that room. what [sic] shit hole.” (10 likes).  I imagine they lifted that one out of the sexist’s rulebook: while calling a woman a slag, tell her to do more housework.

Whether the victim is named varies. On some pages, there are photos of undressed women and above each – with a chilling lack of comment – is their full name. On others, the photos are anonymous and fellow Facebook users bait the poster to name and shame her.

Many of the pages have a town or city in their title, as if this is a trend with regional affiliations. Disturbingly, it also makes it easier for anyone to identify and find the victims. (The NS has decided not to give any more details, or link to any such sites, to avoid further distress to those featured.)

Holly Jacobs, Founder of End Revenge Porn, tells me that so far she’s seen limited action from Facebook in dealing with the issue. “Several people have told me that after they report pages like [these], Facebook refuses to remove them on account that they are not violating any of their terms of service,” she says. “I’d love for Facebook to eventually recognize that these are essentially promoting violence against women, but I suppose that will take some time.”

Pornography, in and of itself, clearly violates Facebook’s terms and conditions. As such, if you report a page that shows sexual acts or nudity, the explicit content means it should be taken down (though that's cold comfort to the naked victims in the meantime). But what about the revenge porn pages where women aren’t naked? Many of the victims I saw were in their bra and pants. To the cold wording of terms and conditions, an ex-boyfriend vengefully posting a photo of a woman in her underwear could be no different from a girl posting a photo of herself on holiday in a bikini. If Facebook’s point of concern is nudity rather than misogyny, what happens to the (technically covered) women currently having their image abused on the site?

Or, to put it another way: does a woman having her image put online to shame and humiliate her matter to Facebook only if it shows her nipples or genitals?

If Facebook is serious about gender-based hate, it needs to get to grips with this: clarifying where it stands on revenge porn and dealing with what’s currently festering under its name. Otherwise, as its users stumble across images of themselves exposed for others’ twisted amusement, Facebook’s problem with women is only going to get darker.


Frances Ryan is a journalist and political researcher. She writes regularly for the Guardian, New Statesman, and others on disability, feminism, and most areas of equality you throw at her. She has a doctorate in inequality in education.


Extremist ads and LGBT videos: do we want YouTube to be a censor, or not?

Is the video-sharing platform a morally irresponsible slacker for putting ads next to extremist content – or an evil, tyrannical censor for restricting access to LGBT videos?

YouTube is having a bad week. The Google-owned video-sharing platform has hit the headlines twice over complaints that it 1) is not censoring things enough, and 2) is censoring things too much.

On the one hand, big brands including Marks & Spencer, HSBC, and RBS have suspended their advertisements from the site after a Times investigation found ads from leading companies – and even the UK government – were shown alongside extremist videos. On the other, YouTubers are tweeting #YouTubeIsOverParty after it emerged that YouTube’s “restricted mode” (an opt-in setting that filters out “potentially objectionable content”) removes content with LGBT themes.

This isn’t the first time we’ve seen a social media giant criticised for being both a lax, morally irresponsible slacker and an evil, tyrannical censor in the same week. Last month, Facebook was criticised both for failing to remove a group called “hot xxxx schoolgirls” and for removing a nude oil painting by an acclaimed artist.

That is not to say these things are equivalent. Quite obviously, child abuse imagery is more troubling than a nude oil painting, and videos entitled “Jewish People Admit Organising White Genocide” are endlessly more problematic than those called “GAY flag and me petting my cat” (a highly important piece of content). I am not trying to claim that ~everything is relative~ and ~everyone deserves a voice~. Content that breaks the law must be removed and LGBT content must not. Yet these conflicting stories highlight the same underlying problem: it is a very bad idea to trust a large multibillion-pound company to be the arbiter of what is or isn’t acceptable.

This isn’t because YouTube has some strange agenda whereby it can’t get enough of extremists and hates the LGBT community. In reality, the company’s “restricted mode” also affects Paul Joseph Watson, a controversial YouTuber whose pro-Trump conspiracy theory content includes videos titled “Islam is NOT a Religion of Peace” and “A Vote For Hillary is a Vote For World War 3”, as well as an interview entitled “Chuck Johnson: Muslim Migrants Will Cause Collapse of Europe”. The issue is that if YouTube did have this agenda, it would have complete control over what it wanted the world to see – and not only are we willingly handing it this power, we are begging it to use it.

Moral panics are the most common justification for extreme censorship and surveillance methods. “Catching terrorists” and “stopping child abusers” are two of the greatest arguments for the dystopian surveillance measures in Theresa May’s Investigatory Powers Act and Digital Economy Bill. Yet in reality, last month the FBI let a child pornographer go free because they didn’t want to tell a court the surveillance methods they used to catch him. This raises the question: what is the surveillance really for? The same is true of censorship. When we insist that YouTube stop this and that, we are asking it to take complete control – why do we trust that this will reflect our own moral sensibilities? Why do we think it won't use this for its own benefit?

Obviously extremist content needs to be removed from YouTube, but why should YouTube be the one to do it? If a book publisher released A Very Racist Book For Racists, we wouldn’t trust them to pull it off the shelves themselves. We have laws (such as the Racial and Religious Hatred Act) that ban hate speech, and we have law enforcement bodies to enforce them. On the whole, we don’t trust giant commercial companies to rule over what is and isn’t acceptable to say, because oh, hello, yes, dystopia.

In the past, public speech was made up of hundreds of book publishers, TV stations, film-makers, and pamphleteers, and no one person or company had the power to censor everything. A book that didn’t fly at one publisher could go to another, and a documentary that the BBC didn’t like could find a home on Channel 4. Why are we happy for essentially two companies – Facebook and Google – to take this power? Why are we demanding that they use it? Why are we giving them justification to use it more, and more, and more?

In response to last week’s criticism about extremist videos on YouTube, Google UK managing director Ronan Harris said that in 2016 Google removed nearly 2 billion ads, banned over 100,000 publishers, and prevented ads from showing on over 300 million YouTube videos. We are supposed to consider this a good thing. Why? We don't know what these adverts were for. We don't know if they were actually offensive. We don't know why they were banned.

As it happens, YouTube has responded well to the criticism. In a statement yesterday, Google's EMEA President, Matt Brittin, apologised to advertisers and promised improvements, and in a blog post this morning, Google said it is already "ramping up changes". A YouTube spokesperson also tweeted that the platform is "looking into" concerns about LGBT content being restricted. But people want more. The Guardian reported that Brittin declined three times to answer whether Google would go beyond allowing users to flag offensive material. Setting aside Brexit, wouldn't you rather it was up to us as a collective to flag offensive content and come together to make these decisions? Why is it preferable that one company takes on a job that was previously entrusted to the government?

Editor’s Note, 22 March: This article has been updated to clarify Paul Joseph Watson’s YouTube content.

Amelia Tait is a technology and digital culture writer at the New Statesman.