Who are the trolls?

What we know about the men (and sometimes women) who spend their days trying to provoke a reaction on the internet.

What's the best definition of an internet troll? Here are two I like:

“A computer user who constructs the identity of sincerely wishing to be part of the group in question … but whose real intention is to cause disruption and/or trigger conflict for the purposes of their own amusement.”

--- Dr Claire Hardaker, academic researcher

"The less famous of two people in a Twitter argument."

--- @ropestoinfinity

Between them, they capture the complexity of the huge, sprawling phenomenon we've come to call trolling. For, as pedants will tell you, the name originally meant someone whose activities were irritating but essentially harmless: one Guardian commenter confessed, in a thread asking trolls to out themselves, that he spent his time on Christian websites calling Herbie: Fully Loaded blasphemous, because it involved a talking car.

Now, the term is used much more broadly, to mean anyone who enrages, disrupts or threatens people over the internet. It's usually assumed that there is a simple power dynamic at work - good people get trolled by bad people. (The media loves this, because a campaign against a faceless, anonymous group that no one will admit to being part of is the easiest campaign you'll ever run.) But it's not that simple. When a famous comedian gets mild abuse on Twitter and retweets it to his followers, encouraging them to pile on, who's more at fault? If a person has ever said anything rude or offensive about another person online, do they lose their right to complain about trolls?

The academic Claire Hardaker has proposed a useful taxonomy of trolls:

RIP trolls, who spend their time causing misery on memorial sites;

fame trolls, who focus all their energies on provoking celebrities;

care trolls, who purport to see abuse in every post about children or animals;

political trolls, who seek to bully MPs out of office; and many others besides.

To these I would add two more: first, subcultural trolls - or "true" trolls - the ones who trawl forums full of earnest people and derail their conversations with silly questions, or hackers like "weev" who really work at being awful (he was involved with a troll collective known as the "Gay Nigger Association of America" and a hacking group called "Goatse Security"). And second, "professional trolls" or "trollumnists": writers and public figures like Samantha Brick and Katie Hopkins whose media careers are built on their willingness to "say the unsayable"; or rather, say something which will attract huge volumes of attention (albeit negative) and hits.

Although there is still relatively little research into trolling - I would recommend Hardaker's work if you are interested, along with that of US academic Whitney Phillips - we can begin to see a few patterns emerging.

Most of the high-profile prosecuted cases in Britain have involved young men: 19-year-old Linford House, who burned a poppy in protest at "squadey cunts"; 25-year-old Sean Duffy, who posted offensive words and images on the Facebook pages of dead teenagers; 21-year-old Liam Stacey, who tweeted racist abuse about Fabrice Muamba while the footballer lay prone and close to death on the pitch; 17-year-old Reece Messer, who was arrested after telling Olympic diver Tom Daley "I'm going to drown you in the pool you cocky twat". Messer suffered from ADHD, and Duffy from a form of autism.

The stereotypical profile doesn't fit all abusive trolls, of course. Frank Zimmerman, who emailed Louise Mensch "You now have Sophie's Choice: which kid is to go. One will. Count on it cunt. Have a nice day", was 60 when he was prosecuted in June 2012. (Zimmerman was an agoraphobic with mental health issues, which the judge cited when ruling that he would not face a custodial sentence.) Megan Meier committed suicide after being sent unpleasant messages by a MySpace friend called "Josh". "Josh" turned out to be Lori Drew, the mother of one of her friends.

Subcultural trolls often share a similar profile with abusive trolls: young, male and troubled. I asked Adrian Chen, the Gawker writer who has unmasked several trolls, including Reddit's Violentacrez (moderator of r/chokeabitch and r/niggerjailbait), if he had seen any common traits in the subcultural trolls he had encountered. He said:

These trolls are predominantly younger white men, although of course trolls of all gender/race/age exist (one of the trolls that has been popping up in my feed recently is Jamie Cochran, aka "AssHurtMacFags", a trans woman from Chicago). They're bright, often self-educated. A lot seem to come from troubled backgrounds. They seem to come from the middle parts of the country [America] more than urban centers.

There's this idea that trolls exist as Jekyll-and-Hyde characters: that they are normal people who go online and turn into monsters. But the biggest thing I've realised while reporting on trolls is that they are pretty much the same offline as online. They like to fuck with people in real life, make crude jokes, get attention. It's just that the internet makes all this much more visible to a bigger audience, and it creates a sort of feedback loop where the most intense parts of their personality are instantly rewarded with more attention, and so those aspects are honed and focused until you have the "troll" persona... I don't think you ever have a case where you show someone's real-life friends what they've been doing online and they would be completely surprised.

The issue of gender is worth raising because, although abusive trolls target both men and women, they seem to find women - particularly feminists - more fun to harass. When there are group troll attacks, male-dominated forums such as Reddit's anti-feminist threads or 4Chan's /b/ board are often implicated. The use of the spelling "raep" in several of the threats sent to Caroline Criado-Perez, and of the phrase "rape train", suggests an organised, subcultural element, and Anita Sarkeesian reports that "Coincidentally whenever I see a noticeable uptick in hate and harassment sent my way there's almost always an angry reddit thread somewhere."

Just as there are social networks, so there are anti-social networks, where those who want to harass a given target can congregate. That has an important bearing on any idea of moderating or policing one network: it's harder to clean up Twitter when a co-ordinated attack on a tweeter can be arranged on another forum.

As for why anyone would do this? Well, anonymity is the reason usually given, but as Tom Postmes, a researcher at the University of Groningen, says: “It’s too simple, too straightforward, to say it turns you into an animal. In all the research online that we know of, anonymity has never had that effect of reducing self-awareness.” He suggests it might have more to do with the lack of consequences: after all, what percentage of people would steal, or lie, or drop litter, if they knew they would not be caught?

Other researchers point to "disinhibition", where people feel less restrained and bound by social norms because they're communicating via a computer rather than face to face. The psychologist John Suler broke this down, in a 2004 paper, into several aspects, which Wired summarised as:

Dissociative anonymity ("my actions can't be attributed to my person"); invisibility ("nobody can tell what I look like, or judge my tone"); asynchronicity ("my actions do not occur in real-time"); solipsistic introjection ("I can't see these people, I have to guess at who they are and their intent"); dissociative imagination ("this is not the real world, these are not real people"); and minimising authority ("there are no authority figures here, I can act freely").

Finally, US researcher Alice Marwick has a simple, if sad, answer for why online trolling exists:

"There’s the disturbing possibility that people are creating online environments purely to express the type of racist, homophobic, or sexist speech that is no longer acceptable in public society, at work, or even at home.”

If that's true, the abusive trolls are a by-product of how far we've come. Is that any comfort to their victims? I don't know. 

The "trollface" meme.

Helen Lewis is deputy editor of the New Statesman. She has presented BBC Radio 4’s Week in Westminster and is a regular panellist on BBC1’s Sunday Politics.


Should Facebook face the heat for the Cleveland shooting video?

On Easter Sunday, a man now dubbed the “Facebook killer” shot and killed a grandfather before uploading footage of the murder to the social network. 

A murder suspect has killed himself after shooting dead a grandfather, seemingly at random, last Sunday. Steve Stephens, 37, was being hunted by police on suspicion of killing Robert Godwin, 74, in Cleveland, Ohio.

The story has made international headlines not because of the murder in itself – in America, there are 12,000 gun homicides a year – but because a video of the shooting was uploaded to Facebook by the suspected killer, along with, moments later, a live-streamed confession.

After it emerged that Facebook took two hours to remove the footage of the shooting, the social network came under fire and promised to “do better” to make the site a “safe environment”. It has also launched a review of how it deals with violent content.

It’s hard to poke holes in Facebook’s official response – written by Justin Osofsky, its vice-president of global operations – which at once acknowledges how difficult it would have been to do more and promises to do more anyway. In a timeline of events, Osofsky notes that the shooting video was not reported to Facebook until one hour and 45 minutes after it had been uploaded. Twenty-three minutes after that, the suspect’s profile was disabled and the videos were no longer visible.

Despite this, the site has been condemned by many, with Reuters calling its response “bungled” and the two-hour response time prompting multiple headlines. Yet solutions are not so readily offered. Currently, the social network relies largely on its users to report offensive content, which is then reviewed and removed by a team of humans – at present, artificial intelligence generates only around a third of the reports that reach this team. The network is constantly working on new algorithms and artificially intelligent tools to uphold its community standards, but there is simply no existing AI that can comb through the posts of Facebook’s one billion-plus active users to immediately identify and remove a video of a murder.

The only solution, then, would be for Facebook to watch every second of every video – 100 million hours of which are watched every day on the site – before it goes live, a task daunting not only for its team, but for anyone concerned about global censorship. Of course Facebook should act as quickly as possible to remove harmful content (and of course Facebook shouldn’t call murder videos “content” in the first place) but does the site really deserve this much blame for the Cleveland killer?

To remove the blame from Facebook is not to deny that watching an auto-playing video of a murder is incredibly psychologically damaging. Nor should we lose sight of the fact that the act, as well as the name “Facebook killer” itself, could arguably inspire copycats. But we have to acknowledge the limits of what technology can do. Even if Facebook had removed the video in three seconds, it is apparent that for thousands of users the first impulse is to download and re-upload upsetting content rather than report it. This is evident in the fact that the victim’s grandson, Ryan, took to a different social network – Twitter – to ask people to stop sharing the video. It took nearly two hours for anyone to report the video to Facebook; it took seconds for people to download a copy for themselves and share it on.

When we ignore these realities and beg Facebook to act, we embolden the moral crusade of surveillance. The UK government has a pattern of using tragedy to justify invasions of our privacy and security, most recently when the home secretary, Amber Rudd, suggested that WhatsApp should remove its encryption after it emerged that the Westminster attacker had used the service. We cannot at once bemoan Facebook’s power in the world and simultaneously beg it to take total control. When you ask Facebook to review all of the content of all of its billions of users, you are asking for a God.

This is particularly undesirable in light of the good that shocking Facebook videos can do – however gruesome. Invaluable evidence is often provided in these clips, be they filmed by criminals themselves or their victims. When Philando Castile’s girlfriend Facebook live-streamed the aftermath of his shooting by a police officer during a traffic stop, it shed international light on police brutality in America and aided the charging of the officer in question. This clip would never have been seen if Facebook had total control of the videos uploaded to its site.  

We need to stop blaming Facebook for things it can’t yet change and start focusing on the things it can. In 2016, the site was criticised for: allowing racial discrimination via its targeted advertising; invading privacy with its facial scanning; banning breast cancer awareness videos; avoiding billions of dollars in tax; and tracking non-users’ activity across the web. Facebook should be under scrutiny for its repeated violations of its users’ privacy, not for hosting violent content – a criticism that will just give the site an excuse to violate people’s privacy even further.

No one blames cars for the recent spate of vehicular terrorist attacks in Europe, and no one should blame Facebook for the Cleveland killer. Ultimately, we should accept that the social network is just a vehicle. The one to blame is the person driving.

If you have accidentally viewed upsetting and/or violent footage on social media that has affected you, call the Samaritans helpline on 116 123 or email jo@samaritans.org

Amelia Tait is a technology and digital culture writer at the New Statesman.
