Reddit blocks Gawker in defence of its right to be really, really creepy

Links from Gawker are banned from /r/politics, after journalist threatens to reveal the identity of the man running Reddit's "creepshots", "beatingwomen" and "jailbait" forums.

Links from the Gawker network of sites have been banned from the Reddit US Politics sub-forum, r/politics. The ban was instigated by a moderator after a Gawker.com journalist, Adrian Chen, apparently threatened to expose the real-life identity of redditor violentacrez, the creator of r/jailbait and r/creepshots. These two sub-forums, or "subreddits", were dedicated to, respectively, sexualised pictures of under-18s and sexualised pictures of women – frequently also under-age – taken in public without their knowledge or consent.

Both subreddits have since been deleted. The first went in a cull of similarly paedophilic subreddits in August last year, which also took down r/teen_girls and r/jailbaitgw ("gone wild", as in "girls gone wild"). The second was made private and then deleted due to the fallout from Chen's investigation.

According to leaked chatlogs, Chen was planning to reveal the real name of violentacrez, and approached him – because come on, it's a he – for comment. That sparked panic behind the scenes, and eventually prompted violentacrez to delete his account.

Reddit's attitude to free speech is a complex one. The extreme laissez-faire attitude of reddit's owners and administrators (the site is owned by Condé Nast, which doesn't interfere in the day-to-day management, and the site administrators similarly refuse to police individual sub-forums) means that replacements for r/creepshots will likely spring up, albeit more underground. Indeed, r/creepyshots was started and then closed within a day. The ability of any redditor to create any subreddit they want, without the site's administration getting involved, is fiercely protected by the community, and that has led to subreddits focused on topics ranging from marijuana use and My-Little-Pony-themed pornography to beating women (also moderated by violentacrez) and, until yesterday, creepshots.

The moderators of the r/politics subreddit apparently consider Chen's attempt to uncover and publish violentacrez's real identity – a practice known as doxxing – to be in violation of this covenant. They write:

As moderators, we feel that this type of behavior is completely intolerable. We volunteer our time on Reddit to make it a better place for the users, and should not be harassed and threatened for that. We should all be afraid of the threat of having our personal information investigated and spread around the internet if someone disagrees with you. Reddit prides itself on having a subreddit for everything, and no matter how much anyone may disapprove of what another user subscribes to, that is never a reason to threaten them. [emphasis original]

It is important to note that the action was taken only by the moderators of r/politics, and not by reddit as a whole. Nonetheless, r/politics is an extremely busy subreddit – one of the defaults to which all new redditors are subscribed – with almost two million subscribed readers and likely an order of magnitude more who read without subscribing. Of the last 23 gawker.com links posted to reddit, five went to r/politics.

The whole affair has an extra level of irony: in hoping to publish publicly available information against violentacrez's wishes, Chen was doing exactly what violentacrez and the other moderators of r/creepshots claimed was legal and ethical. By requiring that all photos be taken in a public area – and, after a public outcry, banning photos taken in schools or featuring under-18-year-olds – they hoped to stay on the right side of the law. Even then, however, the rules were regularly flouted, with a de facto "don't ask, don't tell" policy about the location and age of the subjects of the photos.

Whether or not Chen publishes the violentacrez "outing", a group of anonymous sleuths tried to take the same idea further. A now-deleted Tumblr blog, Predditors, linked reddit usernames to real people. One user, for example, had the same username on reddit.com and the music site last.fm, and the last.fm profile contained a link to his Facebook page. Cross-referencing comments about his age, university and hometown allowed the connection to be confirmed, and meant that the blog could put a name and a face to comments like "NIGGERS GET THE KNIFE" and submissions like "a gallery of my personal collection of shorts, thongs, and ass".

Jezebel interviewed the woman behind Predditors, who argued that:

CreepShots is a gateway drug to more dangerous hobbies. Fetishizing non-consent "indicates [that CreepShots posters] don't view women as people, and most will not be satisfied with just that level of violation," she said. "I want to make sure that the people around these men know what they're doing so they can reap social, professional, or legal consequences, and possibly save women from future sexual assault. These men are dangerous."

Whether or not she's right, the site is certainly incredibly creepy, and it's hard to feel too sorry for men merely getting a taste of their own medicine. But as this debate spills over into the more mainstream areas of the site, Reddit risks becoming, in the minds of the public, ever more closely associated with defending its users' right to post jailbait and creepshots.

Update

Tumblr has reinstated the Predditors blog, and tells me that:

This blog was mistakenly suspended under the impression that it was revealing private, rather than publicly-available, information. We are restoring the blog.

The (anonymous) administrator of the blog itself appears to have set a password on it, however, putting a lid on how far it can go.


Alex Hern is a technology reporter for the Guardian. He was formerly staff writer at the New Statesman. You should follow Alex on Twitter.


“I was frozen to the spot”: the psychological dangers of autoplay video

Videos that play automatically are now ubiquitous across social media, but the format leaves many viewers vulnerable to harm and distress.

Have you ever seen a dog being skinned alive? Witnessed a child, whimpering for his mother, getting beheaded? Have you watched a man, pinned down by two police officers, being shot multiple times in the chest and back?

A few years ago, if you answered “yes” to these questions, you might have been considered deranged. Possibly, you would have been on a list somewhere, being monitored for seeking out disturbing and illicit videos online. Now, you’re more than likely just a victim of social media’s ubiquitous autoplay function.

No one likes autoplay. Back in the Nineties, homepages often came with their own jaunty background tune that would automatically play, but it didn’t take long for this annoying and invasive practice to die out. Nowadays, when you click on a website plastered with noisy adverts and clips, you immediately click off it. But although users frequently bemoan them, autoplay videos remain a huge business model for social media sites such as Twitter, Facebook, and Tumblr.

That’s fine, of course, when the autoplaying video in question is a bird’s-eye view tutorial on how to make nacho-chicken-pizza-fries (though even then, the videos might be gobbling up your data allowance without your consent). The problem arises when disturbing content is posted by users, and even media outlets, without any warnings or disclaimers.

“There are many incidents where the autoplay feature has affected me negatively,” says Sarah Turi, a 19-year-old college student from Boston, USA. Turi suffers from anxiety, and says that anything related to horror or gore can keep her awake for multiple nights on end. She has previously experienced anxiety attacks after viewing autoplaying horror movie trailers.

“Recently though, many of the videos that begin playing have to do with police brutality or terrorism or riots,” she says. “There was one incident where someone had shared a video of an execution. The video started playing, and before I could scroll away, I watched a man get beheaded by a terrorist organisation. It left me pretty shaken to say the least. I wasn't crying, but I was frozen to the spot. Even just thinking about it now leaves me feeling somewhat ill.”

Dr Dawn Branley, a health and social psychologist specialising in the risks and benefits of internet and technology use, tells me that autoplay videos on social media raise a variety of ethical concerns.

“Social media is more personal in nature compared to news channels and it is also often idly browsed with little conscious effort or concentration, and, as such, users may not be mentally prepared for the sudden appearance of a distressing video,” she says. “Suddenly witnessing a beheading, rape or graphic animal cruelty whilst scrolling through photos of your friends and family, holiday snaps, baby videos and wedding announcements may provide a real shock to the viewer.”

Branley says that, in her line of work, she has spoken to users who have experienced distress at photos of abuse and violence on social media, and speculates that autoplay video can only exacerbate this problem. She also notes that such videos can trigger vulnerable users – for example, people who suffer from eating disorders or PTSD.

Even those without pre-existing conditions can be negatively affected, however. Anyone who has seen disturbing footage knows how it can pop into your head intrusively at any time and refuse to budge, remaining plastered to the edges of your skull. Trolls are aware of this too, and some tweet distressing footage at people precisely because it will autoplay.

In January 2015, Facebook responded to these issues by adding warnings to videos users flagged as graphic, meaning the footage wouldn’t autoplay and was preceded by a warning message. Viewers under 18 would be shielded from seeing violent content on their feeds. Yet just over seven months later, in August, autoplay meant thousands inadvertently watched footage of the shooting of TV journalists Alison Parker and Adam Ward.

Remember when I said no one likes autoplay? That's not strictly true. You have three seconds to scroll away from an autoplaying video before Facebook counts it as a view. In a world where Facebook, and its users, are desperate to tally up as many views as possible, autoplay is considered a smart business model.
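The mechanics behind this are simple enough to sketch. Below is a minimal illustration, in browser-side TypeScript, of how a feed might start a video the moment it scrolls into view and log a view once it has stayed on screen for three seconds. The three-second threshold comes from Facebook's stated rule; everything else – the 50 per cent visibility trigger, the reportView function – is an assumption for illustration, not the platform's actual code.

// A minimal sketch of viewport-triggered autoplay with a three-second
// "view" counter. Only the 3,000ms threshold is taken from the article;
// the rest is hypothetical.
const VIEW_THRESHOLD_MS = 3_000;

function reportView(video: HTMLVideoElement): void {
  // Hypothetical analytics call; a real site would POST to its own endpoint.
  console.log(`counted a view for ${video.id}`);
}

function watchForAutoplay(video: HTMLVideoElement): void {
  let viewTimer: number | undefined;

  const observer = new IntersectionObserver(
    (entries) => {
      for (const entry of entries) {
        if (entry.isIntersecting) {
          // Video scrolled into the viewport: play immediately, muted,
          // without any action from the user.
          video.muted = true;
          void video.play();
          // If it stays on screen for three seconds, count it as a view.
          viewTimer = window.setTimeout(() => reportView(video), VIEW_THRESHOLD_MS);
        } else {
          // Scrolled away in time: pause, and no view is counted.
          video.pause();
          window.clearTimeout(viewTimer);
        }
      }
    },
    { threshold: 0.5 } // "on screen" = at least half the player is visible (assumed)
  );

  observer.observe(video);
}

Note what's missing from the sketch: at no point is the user asked anything. The video plays because it appeared, and the clock starts ticking the moment it does.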

“Autoplay video originated as a way to capture viewers’ attention and prevent them from ignoring or scrolling past website content,” says Branley. “The autoplaying nature of a video is likely to capture the viewers’ attention and may potentially be harder to resist watching – compared to a static image and text.”

For those profiting, it seems not to matter that some people who can’t look away are viewers like Turi, frozen on the spot by distress.

Because of how profitable autoplay is, then, many news outlets continue to upload sensitive footage that might be better suited to their own websites – a consensual click away. They might add their own pre-roll warnings, but Branley notes that these are easy to miss if the video is autoplaying. If you were online yesterday, you might have seen this in action, as footage of a boy – or rather the boy – in an ambulance, distressed and bloodied, autoplayed repeatedly across social media.

News outlets have been called out for this before, and have admitted their mistakes. In August 2015, New York Times social media editor Cynthia Collins told The Media Briefing that she wishes she’d added a warning to a video of men being shot and killed at sea. After backlash from their audience, she said:

“If we could do it all again . . . there would have been a discussion about whether or not we should upload the video at all. But if we decided to upload the video I absolutely would have added a warning.”

The video ended up on the website, and viewers had to click through a handful of warnings before watching it. News footage has always had the potential to alarm and distress, but at least in the past viewers had a choice about whether they watched it. Although many news outlets have guidelines on graphic content (such as, for example, the famous breakfast test), these haven’t always been updated for social media.

“It’s important that users are made aware of potential solutions to this problem,” says Branley, noting that Facebook and Twitter include options in their settings to turn off autoplay, and that your browser or phone may also allow you to disable all autoplay. “However, that does not detract from the moral obligation that internet platforms should consider when introducing autoplay.

“I would suggest that an ‘opt-in’ approach (where users are required to switch on the autoplay setting if they wish to enable this function) would be much more appropriate than the current ‘opt-out’ approach, which requires users to find the settings to switch off autoplay if they do not wish to use it.”  

This seems like the simplest and fairest answer. It’s hard to argue that distressing videos shouldn’t be posted on Facebook – last month, the footage of Philando Castile’s shooting dramatically shed light on police brutality – but it seems only natural that viewers should have a choice about what they watch.
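To make the distinction concrete: the difference between the two models Branley describes is nothing more than a default value, as this toy TypeScript sketch shows (the setting names are invented for illustration, not any platform's real configuration).

// Today's "opt-out" model ships with autoplay already on:
const optOutDefaults = { autoplay: true };  // playing unless you dig into settings
// Branley's suggested "opt-in" model would ship with it off:
const optInDefaults  = { autoplay: false }; // nothing plays until the user asks

// An explicit user preference wins; an unset one falls back to the
// platform default – which is why the default carries so much weight.
function shouldAutoplay(userChoice: boolean | undefined, defaults: { autoplay: boolean }): boolean {
  return userChoice ?? defaults.autoplay;
}

Since most users never touch their settings, whichever value sits in that default is the one nearly everyone lives with.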

“It is possible that autoplay videos could be used to raise awareness of sensitive topics and/or to grab users' attention for positive reasons like charity campaigns,” says Branley. “However, it is a fine line between raising awareness and distressing the viewer, and what one viewer finds acceptable, another may find highly distressing. Therefore, care and consideration is required.”

Right now, both care and consideration are lacking. In its current iteration, autoplay is like Anthony Burgess’ Ludovico technique – pinning our eyes open and forcing us to watch violence and horror without our consent. There are things I know I never want to watch – the curb stomp in American History X, an Armenian weightlifter dislocating their elbow during the Olympics – that could be sprung upon me at any time. Why? Because someone, somewhere, profits.

“I don't think autoplay is necessary in Facebook,” says Turi. “I think that people should decide for themselves whether or not they want to watch something. And yes, I think that it should be banned.”

Amelia Tait is a technology and digital culture writer at the New Statesman.