Reddit's science section has banned climate change-denying trolls

One of the site's largest subreddits, r/science, has had enough of angry, conspiracy-spouting posters who do nothing but ruin legitimate debate.

Reddit’s science section - r/science - is one of the site’s default sections (or “subreddits” in the site’s parlance), and is one of the main places on the internet where experts and lay people can come together and chat about science. Its moderators, like the rest of those in charge of subreddits, have to juggle the site community’s strong belief in free speech with the need to prevent arguments, trolling, or anything else that could derail genuine scientific debate.

That’s why they’ve taken the step to ban “climate change deniers” from the subreddit. One of the moderators, chemist Nathan Allen, has written a blog post to explain why the decision was made (I’ve picked out the key paragraphs):

While evolution and vaccines do have their detractors, no topic consistently evokes such rude, uninformed, and outspoken opinions as climate change. Instead of the reasoned and civil conversations that arise in most threads, when it came to climate change the comment sections became a battleground.

...

After some time interacting with the regular denier posters, it became clear that they could not or would not improve their demeanor. These problematic users were not the common “internet trolls” looking to have a little fun upsetting people. Such users are practically the norm on reddit. These people were true believers, blind to the fact that their arguments were hopelessly flawed, the result of cherry-picked data and conspiratorial thinking.

...

We discovered that the disruptive faction that bombarded climate change posts was actually substantially smaller than it had seemed. Just a small handful of people ran all of the most offensive accounts. What looked like a substantial group of objective skeptics to the outside observer was actually just a few bitter and biased posters with more opinions then [sic] evidence.

Negating the ability of this misguided group to post to the forum quickly resulted in a change in the culture within the comments. Where once there were personal insults and bitter accusations, there is now discussion of the relevant aspects of the research.

I used to work as a barman in a pub with a semi-famous regular who obsessively tried to argue that renewable energy was a scam and nuclear power was a better option, and who would pick drunken arguments with other regulars about it just for the sake of it. It was very weird and it made everyone uncomfortable, so we barred them. This is a bit like that.

If you want to see an example of a good discussion about climate change, then head to the comments on r/science about this blog post. There’s a lot of discussion about whether this is a genuine pro-science move, whether it’s a suppression of genuine criticism, and what kinds of tone are acceptable when posting contrary opinions.

For example, there’s a small debate over the politicisation of the word “denier”, and how some who are sceptical of climate models feel they are equated with “holocaust deniers” for daring to speak out. It’s stupid, obviously, but the point is it’s a civil debate compared to what you might see elsewhere when it comes to climate change.

The final question that Allen poses, though, is an interesting one - why don’t newspapers ban people like this too? The scientific consensus that climate change is happening, and is driven by humans, is extremely comprehensive and compelling - but media outlets like the BBC tend to offer "balance" by giving fringe sceptics an equal platform.

r/science has roughly four million monthly unique visitors, which makes it roughly twice as popular a website as the New Statesman, and an influential scientific resource. Perhaps some editors could look to reddit's science moderators for inspiration.

A screenshot of r/science, today.

Ian Steadman is a staff science and technology writer at the New Statesman. He is on Twitter as @iansteadman.


A quote-by-quote analysis of how little Jeremy Hunt understands technology

Can social media giants really implement the health secretary’s sexting suggestions? 

In today’s “Did we do something wrong? No, it was social media” news, Health Secretary Jeremy Hunt has argued that technology companies need to do more to prevent sexting and cyber-bullying.

Hunt, whose job it is to help reduce the teenage suicide rate, argued that the onus for reducing the teenage suicide rate should fall on social media companies such as Facebook and Twitter.

Giving evidence to the Commons Health Committee on suicide prevention, Hunt said: “I think social media companies need to step up to the plate and show us how they can be the solution to the issue of mental ill health amongst teenagers, and not the cause of the problem.”

Pause for screaming and/or tearing out of hair.

Don’t worry though; Hunt wasn’t simply trying to pass the buck, despite the committee suggesting he direct more resources to suicide prevention, as he offered extremely well-thought-out technological solutions that are in no way inferior to providing better sex education for children. Here’s a quote-by-quote analysis of just how technologically savvy Hunt is.

***

“I just ask myself the simple question as to why it is that you can’t prevent the texting of sexually explicit images by people under the age of 18…”

Here’s Hunt asking himself a question that he should be asking the actual experts, which is in no way a waste of anybody’s time at all.

“… If that’s a lock that parents choose to put on a mobile phone contract…”

A lock! But of course. But what should we lock, Jeremy? Should teenagers’ phones come with a ban on all social media apps and, for good measure, a block on the camera app itself? It’s not as if this would simply push teenagers towards dubious apps with significantly less security than giants such as Facebook and Snapchat. Well done.

“Because there is technology that can identify sexually explicit pictures and prevent it being transmitted.”

Erm, is there? Image recognition technology does exist, but it’s incredibly complex and expensive, and companies often rely on other information (such as URLs, tags, and hashes) to filter out and identify explicit images. In addition, social media sites like Facebook rely on their users to click the button that flags an image as a breach of their guidelines, and then have a human team that looks through reported images. The technology is simply unable to identify the individual, unique images that teenagers take of their own bodies, and the idea of a human team tackling the job is preposterous.
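To see why, here’s a minimal sketch (in Python, with an invented blocklist and made-up file contents) of the kind of hash-based matching described above. It can only catch images that are already on somebody’s list; a brand-new photo has never been hashed before, so it sails straight through.

```python
import hashlib

# Hypothetical blocklist of hashes of *known* explicit images,
# e.g. supplied by a safety organisation. Values are illustrative only.
KNOWN_EXPLICIT_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_known_explicit(image_bytes: bytes) -> bool:
    """Exact-match check: only catches images already on the blocklist."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_EXPLICIT_HASHES

# A brand-new photo a teenager takes of themselves has never been seen
# before, so its hash appears on no list and the check waves it through.
print(is_known_explicit(b"a photo nobody has ever uploaded before"))  # False
```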

But suppose the technology did exist that could flawlessly scan a picture for fleshy bits and bobs? As a tool to prevent sexting, it would still be extremely flawed. What if two teens wanted to message one another Titian’s Venus for art or history class? In September, Facebook itself was forced to U-turn after removing the historical “napalm girl” photo from the site.

As for the second part of Jezza’s suggestion, if you can’t identify it, you can’t block it. Facebook Messenger already blocks you from sending pornographic links, but this again relies on analysis of the URLs rather than the content behind them. Other messaging services, such as WhatsApp, offer end-to-end encryption (E2EE), meaning – most likely to Hunt’s chagrin – the messages sent on them are neither stored by the company nor easily accessible to the government.

“I ask myself why we can’t identify cyberbullying when it happens on social media platforms by word pattern recognition, and then prevent it happening.”

Jeremy, Jeremy, Jeremy, Jeremy, can’t you spot your problem yet? You’ve got to stop asking yourself!

There is simply no algorithm yet intelligent enough to identify bullying language. Why? Because we call our best mate “dickhead” and our worst enemy “pal”. Human language and meaning are infinitely complex, and scanning for certain words would almost certainly lead to false positives. As Labour MP Thangam Debbonaire famously learned this year, even humans can’t always tell whether language is offensive, so what chance does an algorithm stand?
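Here’s a deliberately naive sketch of the kind of “word pattern recognition” Hunt imagines (the word list and messages are invented for illustration): it flags the banter and waves the threat straight through.

```python
# Toy keyword filter: the abusive-word list and example messages are made up.
ABUSIVE_WORDS = {"dickhead", "idiot", "loser"}

def looks_like_bullying(message: str) -> bool:
    """Flag a message if it contains any word from the blocklist."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & ABUSIVE_WORDS)

# A friendly message between best mates gets flagged...
print(looks_like_bullying("Alright dickhead, you coming to five-a-side?"))  # True

# ...while a genuinely menacing one passes without comment.
print(looks_like_bullying("Watch your back on the way home, pal."))  # False
```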

(Side note: It is also amusing to imagine that Hunt could even begin to keep up with teenage slang in this scenario.)

Many also argue that because social media sites can remove copyrighted files efficiently, they should get better at removing abusive language. This argument is flawed because searching for a specific file is easy – copyright holders will often send social media giants hashes of their files, which the platforms can then match against their databases – whereas, for the reasons outlined above, it is exceptionally difficult for an algorithm to accurately identify the true meaning of language.

“I think there are a lot of things where social media companies could put options in their software that could reduce the risks associated with social media, and I do think that is something which they should actively pursue in a way that hasn’t happened to date.”

Leaving aside the fact that social media companies constantly come up with solutions for these problems, Hunt has left us with the burning question of whether any of this is even desirable at all.

Why should he prevent under-18s from sexting when the age of consent in the UK is 16? Where has this sudden moral panic about pornography come from? Are the government laying the ground for mass censorship? If two consenting teenagers want to send each other the aubergine emoji a couple of times a week, why should we stop them? Is it not up to parents, rather than the government, to monitor and supervise their children’s online activities? Would education, with all of this in mind, not be the better option? Won't somebody please think of the children?

“There is a lot of evidence that the technology industry, if they put their mind to it, can do really smart things.”

Alas, if only we could say the same for you, Mr Hunt.

Amelia Tait is a technology and digital culture writer at the New Statesman.