Who are the trolls?

What we know about the men (and sometimes women) who spend their days trying to provoke a reaction on the internet.

What's the best definition of an internet troll? Here are two I like:

“A computer user who constructs the identity of sincerely wishing to be part of the group in question … but whose real intention is to cause disruption and/or trigger conflict for the purposes of their own amusement.”

--- Dr Claire Hardaker, academic researcher

“The less famous of two people in a Twitter argument.”

--- @ropestoinfinity

Between them, they capture the complexity of the huge, sprawling phenomenon we've come to call trolling. For, as pedants will tell you, the name originally meant someone whose activities were irritating but essentially harmless: one Guardian commenter, in a thread asking trolls to out themselves, confessed that he spent his time on Christian websites calling Herbie: Fully Loaded blasphemous because it involved a talking car.

Now, the term is used much more broadly, to mean anyone who enrages, disrupts or threatens people over the internet. It's usually assumed that there is a simple power dynamic at work: good people get trolled by bad people. (The media loves this, because a campaign against a faceless, anonymous group that no one will admit to being a part of is the easiest campaign you'll ever run.) But it's not that easy. When a famous comedian gets mild abuse on Twitter, and retweets it to his followers, encouraging them to pile on, who's more at fault? If a person has ever said anything rude or offensive about another person online, do they lose their right to complain about trolls?

The academic Claire Hardaker has proposed a useful taxonomy of trolls:

RIP trolls, who spend their time causing misery on memorial sites;

fame trolls, who focus all their energies on provoking celebrities;

care trolls, who purport to see abuse in every post about children or animals;

political trolls, who seek to bully MPs out of office; and many others besides.

To these I would add two more. First, subcultural trolls, or "true" trolls: the ones who trawl forums full of earnest people and derail their conversations with silly questions, or hackers like "weev" who really work at being awful (he was involved with a troll collective known as the "Gay Nigger Association of America" and a hacking group called "Goatse Security"). And second, "professional trolls" or "trollumnists": writers and public figures like Samantha Brick and Katie Hopkins whose media careers are built on their willingness to "say the unsayable", or rather, to say something that will attract huge volumes of attention (albeit negative) and hits.

Although there is still relatively little research into trolling (I would recommend Hardaker's work if you are interested, along with that of the US academic Whitney Phillips), we can begin to see a few patterns emerging.

Most of the high-profile prosecuted cases in Britain have involved young men: 19-year-old Linford House, who burned a poppy in protest at "squadey cunts"; 25-year-old Sean Duffy, who posted offensive words and images on the Facebook pages of dead teenagers; 21-year-old Liam Stacey, who tweeted racist abuse about Fabrice Muamba while the footballer lay prone and close to death on the pitch; 17-year-old Reece Messer, who was arrested after telling the Olympic diver Tom Daley "I'm going to drown you in the pool you cocky twat". Messer suffered from ADHD, and Duffy from a form of autism.

The stereotypical profile doesn't fit all abusive trolls, of course. Frank Zimmerman, who emailed Louise Mensch "You now have Sophie’s Choice: which kid is to go. One will. Count on it cunt. Have a nice day", was 60 when he was prosecuted in June 2012. (Zimmerman was an agoraphobic with mental health issues, which the judge cited when ruling that he would not face a custodial sentence.) Megan Meier committed suicide after being sent unpleasant messages by a MySpace friend called "Josh". Josh turned out to be Lori Drew, the mother of one of her friends.

Subcultural trolls often share a similar profile to abusive trolls: young, male and troubled. I asked Adrian Chen, the Gawker writer who has unmasked several trolls, such as Reddit's Violentacrez (moderator of r/chokeabitch and r/niggerjailbait), if he had seen any common traits in the subcultural trolls he had encountered. He said:

These trolls are predominantly younger white men, although of course trolls of all gender/race/age exist (one of the trolls that has been popping up in my feed recently is Jamie Cochran, aka "AssHurtMacFags", a trans woman from Chicago). They're bright, often self-educated. A lot seem to come from troubled backgrounds. They seem to come from the middle parts of the country [America] more than urban centers.

There's this idea that trolls exist as Jekyll-and-Hyde characters: that they are normal people who go online and turn into monsters. But the biggest thing I've realised while reporting on trolls is that they are pretty much the same offline as online. They like to fuck with people in real life, make crude jokes, get attention. It's just that the internet makes all this much more visible to a bigger audience, and it creates a sort of feedback loop where the most intense parts of their personality are instantly rewarded with more attention, and so those aspects are honed and focused until you have the "troll" persona... I don't think you ever have a case where you show someone's real-life friends what they've been doing online and they would be completely surprised.

The issue of gender is worth raising because, although men and women are both targeted by abusive trolls, they seem to find women, particularly feminists, more fun to harass. When there are group troll attacks, male-dominated forums such as Reddit's anti-feminist threads or 4chan's /b/ board are often implicated. The use of the spelling "raep" in several of the threats sent to Caroline Criado-Perez, and of the phrase "rape train", suggests an organised, subcultural element; Anita Sarkeesian reports that "Coincidentally whenever I see a noticeable uptick in hate and harassment sent my way there's almost always an angry reddit thread somewhere."

Just as there are social networks, so there are anti-social networks, where those who want to harass a given target can congregate. That has an important bearing on any idea of moderating or policing one network: it's harder to clean up Twitter when a co-ordinated attack on a tweeter can be arranged on another forum.

As for why anyone would do this: well, anonymity is the reason that's usually given, but as Tom Postmes, a researcher at the University of Groningen, says: “It’s too simple, too straightforward, to say it turns you into an animal. In all the research online that we know of, anonymity has never had that effect of reducing self-awareness.” He suggests it might be more to do with the lack of consequences: after all, what percentage of people would steal, or lie, or drop litter, if they knew they would not be caught?

Other researchers point to "disinhibition", where people feel less restrained and bound by social norms because they're communicating via a computer rather than face to face. The psychologist John Suler broke this down into several aspects in a 2004 paper, which Wired summarised as:

Dissociative anonymity ("my actions can't be attributed to my person"); invisibility ("nobody can tell what I look like, or judge my tone"); asynchronicity ("my actions do not occur in real-time"); solipsistic introjection ("I can't see these people, I have to guess at who they are and their intent"); dissociative imagination ("this is not the real world, these are not real people"); and minimising authority ("there are no authority figures here, I can act freely").

Finally, US researcher Alice Marwick has a simple, if sad, answer for why online trolling exists:

"There’s the disturbing possibility that people are creating online environments purely to express the type of racist, homophobic, or sexist speech that is no longer acceptable in public society, at work, or even at home.”

If that's true, the abusive trolls are a by-product of how far we've come. Is that any comfort to their victims? I don't know. 

The "trollface" meme.

Helen Lewis is deputy editor of the New Statesman. She has presented BBC Radio 4’s Week in Westminster and is a regular panellist on BBC1’s Sunday Politics.


“I was frozen to the spot”: the psychological dangers of autoplay video

Videos that play automatically are now ubiquitous across social media, but the format leaves many viewers vulnerable to harm and distress.

Have you ever seen a dog being skinned alive? Witnessed a child, whimpering for his mother, getting beheaded? Have you watched a man, pinned down by two police officers, being shot multiple times in the chest and back?

A few years ago, if you answered “yes” to these questions, you might have been considered deranged. Possibly, you would have been on a list somewhere, being monitored for seeking out disturbing and illicit videos online. Now, you’re more than likely just a victim of social media’s ubiquitous autoplay function.

No one likes autoplay. Back in the Nineties, homepages often came with their own jaunty background tune that would automatically play, but it didn’t take long for this annoying and invasive practice to die out. Nowadays, when you click on a website plastered with noisy adverts and clips, you immediately click off it. But although users frequently bemoan them, autoplay videos remain big business for social media sites such as Twitter, Facebook, and Tumblr.

That’s fine, of course, when the autoplaying video in question is a bird’s-eye view tutorial on how to make nacho-chicken-pizza-fries (though even then, the videos might be gobbling up your data allowance without your consent). The problem arises when disturbing content is posted by users, and even media outlets, without any warnings or disclaimers.

“There are many incidents where the autoplay feature has affected me negatively,” says Sarah Turi, a 19-year-old college student from Boston, USA. Turi suffers from anxiety, and says that anything related to horror or gore can keep her awake for multiple nights on end. She has previously experienced anxiety attacks after viewing autoplaying horror movie trailers.

“Recently though, many of the videos that begin playing have to do with police brutality or terrorism or riots,” she says. “There was one incident where someone had shared a video of an execution. The video started playing, and before I could scroll away, I watched a man get beheaded by a terrorist organisation. It left me pretty shaken to say the least. I wasn't crying, but I was frozen to the spot. Even just thinking about it now leaves me feeling somewhat ill.”

Dr Dawn Branley, a health and social psychologist specialising in the risks and benefits of internet and technology use, tells me that autoplay videos on social media raise a variety of ethical concerns.

“Social media is more personal in nature compared to news channels and it is also often idly browsed with little conscious effort or concentration, and, as such, users may not be mentally prepared for the sudden appearance of a distressing video,” she says. “Suddenly witnessing a beheading, rape or graphic animal cruelty whilst scrolling through photos of your friends and family, holiday snaps, baby videos and wedding announcements may provide a real shock to the viewer.”

Branley says that, in her line of work, she has spoken to users who have experienced distress at photos of abuse and violence on social media, and speculates that autoplay video could only exacerbate this problem. She also notes that such footage can trigger vulnerable users, for example people who suffer from eating disorders or PTSD.

Even those without pre-existing conditions can be negatively affected, however: as anyone who has seen disturbing footage knows, it can pop into your head intrusively at any time and refuse to budge, remaining plastered to the edges of your skull. Trolls know this too, and some tweet distressing footage at people precisely because it will autoplay.

In January 2015, Facebook responded to these issues by adding warning screens to videos users had flagged as graphic: such footage would no longer autoplay, and viewers under 18 would be shielded from seeing violent content on their feeds. Yet just over seven months later, in August, autoplay meant thousands inadvertently watched footage of the shooting of the TV journalists Alison Parker and Adam Ward.

Remember when I said no one likes autoplay? That’s not strictly true. You have three seconds to scroll away from an autoplaying video before Facebook counts it as a view. In a world where Facebook, and its users, are desperate to tally up as many views as possible, autoplay is considered a smart business model.

“Autoplay video originated as a way to capture viewers’ attention and prevent them from ignoring or scrolling past website content,” says Branley. “The autoplaying nature of a video is likely to capture the viewers’ attention and may potentially be harder to resist watching – compared to a static image and text.”

For those profiting, it seems not to matter that some people who can’t look away are viewers like Turi, frozen on the spot by distress.

Because autoplay is so profitable, then, many news outlets continue to upload sensitive footage that might be better suited to their own websites, a consensual click away. They might add their own pre-roll warnings, but Branley notes that these are easy to miss if the video is autoplaying. If you were online yesterday, you might have seen this in action, as footage of a boy (or rather the boy) in an ambulance, distressed and bloodied, autoplayed repeatedly across social media.

News outlets have been called out for this before, and have admitted their mistakes. In August 2015, after a backlash from readers, New York Times social media editor Cynthia Collins told The Media Briefing that she wished she had added a warning to a video of men being shot and killed at sea:

“If we could do it all again . . . there would have been a discussion about whether or not we should upload the video at all. But if we decided to upload the video I absolutely would have added a warning.”

The video ended up on the website, and viewers had to click through a handful of warnings before watching it. News footage has always had the potential to alarm and distress, but at least in the past viewers had a choice about whether they watched it. Although many news outlets have guidelines on graphic content (such as the famous breakfast test), these haven’t always been updated for social media.

“It’s important that users are made aware of potential solutions to this problem,” says Branley, noting that Facebook and Twitter include options in their settings to turn off autoplay, and that your browser or phone may also allow you to disable all autoplay. “However, that does not detract from the moral obligation that internet platforms should consider when introducing autoplay.

“I would suggest that an ‘opt-in’ approach (where users are required to switch on the autoplay setting if they wish to enable this function) would be much more appropriate than the current ‘opt-out’ approach, which requires users to find the settings to switch off autoplay if they do not wish to use it.”  

This seems like the simplest and fairest answer. It’s hard to argue that distressing videos shouldn’t be posted on Facebook (last month, footage of Philando Castile’s shooting dramatically shed light on police brutality), but it seems only natural that viewers should have a choice about what they watch.

“It is possible that autoplay videos could be used to raise awareness of sensitive topics and/or to grab users' attention for positive reasons like charity campaigns,” says Branley. “However, it is a fine line between raising awareness and distressing the viewer, and what one viewer finds acceptable, another may find highly distressing. Therefore, care and consideration is required.”

Right now, both care and consideration are lacking. In its current iteration, autoplay is like Anthony Burgess’ Ludovico technique: pinning our eyes open and forcing us to watch violence and horror without our consent. There are things I know I never want to watch (the curb stomp in American History X, an Armenian weightlifter dislocating their elbow during the Olympics) that could be sprung upon me at any time. Why? Because someone, somewhere, profits.

“I don't think autoplay is necessary in Facebook,” says Turi. “I think that people should decide for themselves whether or not they want to watch something. And yes, I think that it should be banned.”

Amelia Tait is a technology and digital culture writer at the New Statesman.