Reddit matures, and apologises

The site's general manager has apologised for its conduct during the Boston crisis.

Reddit's general manager, Erik Martin, has apologised for the site's role in creating and spreading misinformation related to the Boston Marathon bombings:

Though started with noble intentions, some of the activity on reddit fueled online witch hunts and dangerous speculation which spiraled into very negative consequences for innocent parties. The reddit staff and the millions of people on reddit around the world deeply regret that this happened. We have apologized privately to the family of missing college student Sunil Tripathi, as have various users and moderators. We want to take this opportunity to apologize publicly for the pain they have had to endure. We hope that this painful event will be channeled into something positive and the increased awareness will lead to Sunil's quick and safe return home. We encourage everyone to join and show your support to the Tripathi family and their search.

The apology is interesting because it reflects how the rest of the world views Reddit far more than how the community views itself. The decentralised nature of the site means that almost everything Martin is apologising for is actually the fault of its users, rather than of the company that runs Reddit and that Martin is in charge of. The subreddit r/findbostonbombers was set up, and moderated, by ordinary users; it was Reddit's users who posted personal information, and Reddit's users who led the witch hunts. Viewed from that angle, blaming "Reddit" for this tragedy seems like blaming "Twitter" for naming rape victims: a useful shorthand, but not something we'd expect the head of the company to apologise for.

But the Reddit community is still centralised in a way that Twitter isn't, and that has repercussions. Go to the front page of Reddit without being logged in and you'll see the same list of content as everyone else, and the same list that many logged-in users see. Hit up Twitter, on the other hand, and the site doesn't show you a thing until you've told it who you want to follow.

In other words, Twitter is a communications medium through and through, but Reddit – while not a publication in a traditional sense – has elements that we recognise from more conventional news sites. That means the site walks a fine line between trying to enable as much freedom for its users as possible, and having to deal with their mistakes as though someone on a salary made them.

The administration has previously been pretty unambiguous in declaring that it is not responsible for its users' actions, beyond the site's "park rules":

A small number of cases that we, the admins, reserve for stepping in and taking immediate action against posts, subreddits, and users. We don’t like to have to do it, but we’re also responsible for overseeing the park. Internally, we’ve followed the same set of guidelines for a long time, and none of these should be any surprise to anyone…

  1. Don’t spam
  2. Don’t vote cheat (it doesn’t work, anyway)
  3. Don’t post personal information
  4. Don’t post sexually suggestive content featuring minors
  5. Don’t break the site or interfere with normal usage of the site for anyone else

Those rules are not particularly restrictive, and #4 was only strengthened from the incredibly laissez-faire "no child pornography" last February. Beyond that, the admins have tended to stay silent in the face of what would seem to be noteworthy controversies, like the outing of Violentacrez by Gawker's Adrian Chen and the subsequent widespread banning of Gawker Media links from the site.

So it would have been easy for Reddit to respond to this latest problem in much the same way. Blame its users, point out that it has rules to prevent the worst of it and that it is deliberately laissez-faire about the rest, and wash its hands of the whole deal.

That it hasn't is a sign of maturity from the administrative team. But it also means that there are going to be many more controversies which they'll be expected to have a view on in future, unless the Reddit community matures at the same time. The chances of that happening soon remain slim.

Photograph: Getty Images

Alex Hern is a technology reporter for the Guardian. He was formerly staff writer at the New Statesman. You should follow Alex on Twitter.


“Stinking Googles should be killed”: why 4chan is using a search engine as a racist slur

Users of the anonymous forum are targeting Google after the company introduced a programme for censoring abusive language.

Contains examples of racist language and memes.

“You were born a Google, and you are going to die a Google.”

Despite the lack of obscenity and profanity in this sentence, you have probably realised it was intended to be offensive. It is just one of hundreds of similar messages posted by users of 4chan’s /pol/ board – an anonymous forum where people go to be politically incorrect. But they haven’t suddenly seen the error of their ways about using the n-word to demean their fellow human beings – instead they are trying to make the word “Google” itself a racist slur.

In an undertaking known as “Operation Google”, some 4chan users are resisting Google’s latest artificial intelligence program, Conversation AI, by swapping slurs for the names of Google products. Conversation AI aims to spot and flag offensive language online, with the eventual possibility that it could automatically delete abusive comments. The famously outspoken forum 4chan, and the similar website 8chan, didn’t like this, and began a campaign in which they refer to Jewish people as “Skypes”, Muslims as “Skittles”, and black people as “Googles”.

If it weren’t for the utterly abhorrent racism – which includes users conflating Google’s chat tool “Hangouts” with pictures of lynched African-Americans – it would be a genius idea. The group aims to force Google to censor its own name, making its AI redundant. Yet some have acknowledged this might not ultimately work – as the AI will be able to use contextual clues to filter out when “Google” is used positively or pejoratively – and their ultimate aim is now simply to make “Google” a racist slur as revenge.


Posters from 4chan

“If you're posting anything on social media, just casually replace n****rs/blacks with googles. Act as if it's already a thing,” wrote one anonymous user. “Ignore the company, just focus on the word. Casually is the important word here – don't force it. In a month or two, Google will find themselves running a company which is effectively called ‘n****r’. And their entire brand is built on that name, so they can't just change it.”

There is no doubt that Conversation AI is questionable to anyone who values free speech. Although most people desire a nicer internet, it is hard to agree that this should be achieved by blocking out large swathes of people, and putting the power to do so in the hands of one company. Additionally, algorithms can’t yet accurately detect sarcasm and humour, so false positives are highly likely when a bot tries to identify whether something is offensive. Indeed, Wired journalist Andy Greenberg tested out Conversation AI and discovered it gave “I shit you not” 98 out of 100 on its personal attack scale.

Yet these 4chan users have made it impossible to agree with their fight against Google by combining it with their racism. Google scores the word “moron” 99 out of 100 on its offensiveness scale. Had protestors decided to replace this – or possibly even more offensive words like “bitch” or “motherfucker” – with “Google”, pretty much everyone would be on board.

Some 4chan users are aware of this – and indeed it is important not to treat the site as a unanimous entity. “You're just making yourselves look like idiots and ruining any legitimate effort to actually do this properly,” wrote one user, while some discussed their concerns that “normies” – i.e. normal people – would never join in. Other 4chan users are against Operation Google because they see it as self-censorship, or simply stupid.


Memes from 4chan

But anyone who disregards these efforts as the work of morons (or should that be Bings?) clearly does not understand the power of 4chan. The site brought down Microsoft’s AI Tay in a single day, brought the Unicode swastika (卐) to the top of Google’s trends list in 2008, hacked Sarah Palin’s email account, and leaked a large number of celebrity nudes in 2014. If the Ten Commandments were rewritten for the modern age and Moses took to Mount Sinai to wave two 16GB Tablets in the air, then the number one rule would be short and sweet: Thou shalt not mess with 4chan.

It is not yet clear how Google will respond to the attack, or whether it will ultimately affect the AI. Yet despite what ten years of Disney conditioning taught us as children, the world isn’t split into goodies and baddies. While 4chan’s methods are deplorable, their aim of questioning whether one company should have the power to censor the internet is not.

Google also hit headlines this week for its new “YouTube Heroes” programme, a system that sees YouTube users rewarded with points when they flag offensive videos. It’s not hard to see how this kind of crowdsourced censorship is undesirable, particularly as, once again, the chance of content being incorrectly flagged is huge. A few weeks ago, popular YouTubers also hit back at censorship that saw them lose their advertising money from the site, leading #YouTubeIsOverParty to trend on Twitter. Perhaps ultimately, 4chan didn't need to go on a campaign to damage Google's name. It might already have been doing a good enough job of that itself.

Google has been contacted for comment.

Amelia Tait is a technology and digital culture writer at the New Statesman.