Who are the trolls?

What we know about the men (and sometimes women) who spend their days trying to provoke a reaction on the internet.

What's the best definition of an internet troll? Here are two I like:

“A computer user who constructs the identity of sincerely wishing to be part of the group in question … but whose real intention is to cause disruption and/or trigger conflict for the purposes of their own amusement.”

--- Dr Claire Hardaker, academic researcher

“The less famous of two people in a Twitter argument.”

--- @ropestoinfinity

Between them, they catch the complexity of the huge, sprawling phenomenon we've come to call trolling. For, as pedants will tell you, the name originally meant someone whose activities were irritating, but essentially harmless: one Guardian commenter confessed in a thread asking trolls to out themselves that he spent his time on Christian websites, calling Herbie: Fully Loaded blasphemous, because it involved a talking car. 

Now, the term is used much more broadly, to mean anyone who enrages, disrupts or threatens people over the internet. It's usually assumed that there is a simple power dynamic at work - good people get trolled by bad people. (The media loves this, because a campaign against a faceless, anonymous group that no one will admit to being a part of is the easiest campaign you'll ever run.) But it's not that easy. When a famous comedian gets mild abuse on Twitter, and retweets it to his followers, encouraging them to pile on, who's more at fault? If a person has ever said anything rude or offensive about another person online, do they lose their right to complain about trolls?

The academic Claire Hardaker has proposed a useful taxonomy of trolls:

RIP trolls, who spend their time causing misery on memorial sites;

fame trolls, who focus all their energies on provoking celebrities;

care trolls, who purport to see abuse in every post about children or animals;

political trolls who seek to bully MPs out of office; and many others besides.

To these I would add two more: first, subcultural trolls - or "true" trolls - the ones who trawl forums full of earnest people and derail their conversations with silly questions, or hackers like "weev" who really work at being awful (he was involved with a troll collective known as the "Gay Nigger Association of America" and a hacking group called "Goatse Security"). And second, "professional trolls" or "trollumnists": writers and public figures like Samantha Brick and Katie Hopkins whose media careers are built on their willingness to "say the unsayable"; or rather, say something which will attract huge volumes of attention (albeit negative) and hits.

Although there is still relatively little research into trolling - I would recommend Hardaker's work if you are interested, along with that of US academic Whitney Phillips - we can begin to see a few patterns emerging.

Most of the high-profile prosecuted cases in Britain have involved young men: 19-year-old Linford House, who burned a poppy in protest at "squadey cunts"; 25-year-old Sean Duffy, who posted offensive words and images on the Facebook pages of dead teenagers; 21-year-old Liam Stacey, who tweeted racist abuse about Fabrice Muamba while the footballer lay prone and close to death on the pitch; 17-year-old Reece Messer, who was arrested after telling Olympic diver Tom Daley "I'm going to drown you in the pool you cocky twat". Messer suffered from ADHD, and Duffy from a form of autism.

The stereotypical profile doesn't fit all abusive trolls, of course. Frank Zimmerman, who emailed Louise Mensch "You now have Sophie’s Choice: which kid is to go. One will. Count on it cunt. Have a nice day", was 60 when he was prosecuted in June 2012. (Zimmerman was an agoraphobic with mental health issues, which the judge cited when ruling that he would not face a custodial sentence.) Megan Meier committed suicide after being sent unpleasant messages by a MySpace friend called "Josh". Josh turned out to be Lori Drew, the mother of one of her friends.

Sub-cultural trolls often share a similar profile to abusive trolls: young, male and troubled. I asked Adrian Chen, the Gawker writer who has unmasked several trolls such as Reddit's Violentacrez (moderator of r/chokeabitch and r/niggerjailbait), if he had seen any common traits in the sub-cultural trolls he had encountered. He said:

These trolls are predominantly younger white men, although of course trolls of all gender/race/age exist (one of the trolls that has been popping up in my feed recently is Jamie Cochran aka "AssHurtMacFags" a trans woman from Chicago). They're bright, often self-educated. A lot seem to come from troubled backgrounds. They seem to come from the middle parts of the country [America] more than urban centers. 

There's this idea that trolls exist as Jekyll-and-Hyde characters: that they are normal people who go online and turn into monsters. But the biggest thing I've realised while reporting on trolls is that they are pretty much the same offline as online. They like to fuck with people in real life, make crude jokes, get attention. It's just that the internet makes all this much more visible to a bigger audience, and it creates a sort of feedback loop where the most intense parts of their personality are instantly rewarded with more attention, and so those aspects are honed and focused until you have the "troll" persona... I don't think you ever have a case where you show someone's real-life friends what they've been doing online and they would be completely surprised.

The issue of gender is worth raising, because although men and women are both targeted by abusive trolls, they seem to find women - particularly feminists - more fun to harass. When there are group troll attacks, male-dominated forums such as Reddit's anti-feminist threads or 4Chan's /b/ board are often implicated. The use of the spelling "raep" in several of the threats sent to Caroline Criado-Perez, and the words "rape train" suggest an organised, subcultural element, and Anita Sarkeesian reports that "Coincidentally whenever I see a noticeable uptick in hate and harassment sent my way there's almost always an angry reddit thread somewhere."

Just as there are social networks, so there are anti-social networks, where those who want to harass a given target can congregate. That has an important bearing on any idea of moderating or policing one network: it's harder to clean up Twitter when a co-ordinated attack on a tweeter can be arranged on another forum.

As for why anyone would do this? Well, anonymity is the reason that's usually given, but as Tom Postmes, a researcher at the University of Groningen, says: "It’s too simple, too straightforward, to say it turns you into an animal. In all the research online that we know of, anonymity has never had that effect of reducing self-awareness.” He suggests it might be more to do with the lack of consequences: after all, what percentage of people would steal, or lie, or drop litter, if they knew they would not be caught?

Other researchers point to "disinhibition", where people feel less restrained and bound by social norms because they're communicating via a computer rather than face to face. Psychologist John Suler broke this down in a 2004 paper into several aspects, which Wired summarised as:

Dissociative anonymity ("my actions can't be attributed to my person"); invisibility ("nobody can tell what I look like, or judge my tone"); asynchronicity ("my actions do not occur in real-time"); solipsistic introjection ("I can't see these people, I have to guess at who they are and their intent"); dissociative imagination ("this is not the real world, these are not real people"); and minimising authority ("there are no authority figures here, I can act freely").

Finally, US researcher Alice Marwick has a simple, if sad, answer for why online trolling exists:

"There’s the disturbing possibility that people are creating online environments purely to express the type of racist, homophobic, or sexist speech that is no longer acceptable in public society, at work, or even at home.”

If that's true, the abusive trolls are a by-product of how far we've come. Is that any comfort to their victims? I don't know. 

The "trollface" meme.

Helen Lewis is deputy editor of the New Statesman. She has presented BBC Radio 4’s Week in Westminster and is a regular panellist on BBC1’s Sunday Politics.


Fark.com’s censorship story is a striking insight into Google’s unchecked power

The founder of the community-driven website claims its advertising revenue was cut off for five weeks.

When Microsoft launched its new search engine Bing in 2009, it wasted no time in trying to get the word out. By striking a deal with the producers of the American teen drama Gossip Girl, it made a range of beautiful characters utter the words “Bing it!” in a way that fell clumsily on the audience’s ears. By the early Noughties, “search it” had already been universally replaced by the words “Google it”, a phrase that had become so ubiquitous that anything else sounded odd.

A screenshot from Gossip Girl, via ildarabbit.wordpress.com

Like Hoover and Tupperware before it, Google’s brand name has now become a generic term.

Yet only recently have concerns about Google’s pervasiveness received mainstream attention. Last month, The Observer ran a story about Google’s auto-fill pulling up the suggested question of “Are Jews evil?” and giving hate speech prominence in the first page of search results. Within a day, Google had altered the autocomplete results.

Though the company’s response may seem promising, it is important to remember that Google isn’t just a search engine (Google’s parent company, Alphabet, has too many subdivisions to mention). Google AdSense is an online advertising service that allows many websites to profit from hosting advertisements on their pages, including the New Statesman itself. Yesterday, Drew Curtis, the founder of the internet news aggregator Fark.com, shared a story about his experiences with the service.

Under the headline “Google farked us over”, Curtis wrote:

“This past October we suffered a huge financial hit because Google mistakenly identified an image that was posted in our comments section over half a decade ago as an underage adult image – which is a felony by the way. Our ads were turned off for almost five weeks – completely and totally their mistake – and they refuse to make it right.”

The image was of a fully-clothed actress who was an adult at the time, yet Curtis claims Google flagged it because of “a small pedo bear logo” – a meme used to mock paedophiles online. More troubling than Google’s decision, however, is the difficulty that Curtis had contacting the company and resolving the issue, a process which he claims took five weeks. He wrote:

“During this five week period where our ads were shut off, every single interaction with Google Policy took between one to five days. One example: Google Policy told us they shut our ads off due to an image. Without telling us where it was. When I immediately responded and asked them where it was, the response took three more days.”

Curtis claims that other sites have had these issues but are too afraid of Google to speak out publicly. A Google spokesperson says: "We constantly review publishers for compliance with our AdSense policies and take action in the event of violations. If publishers want to appeal or learn more about actions taken with respect to their account, they can find information at the help centre here.”

Fark.com has lost revenue because of Google’s decision, according to Curtis, who sent out a plea for new subscribers to help it “get back on track”. It is easy to see how a smaller website could have been ruined in a similar scenario.


The offending image, via Fark

Google’s decision was not sinister, and it is obviously important that it tackles content that violates its policies. The lack of transparency around such decisions, and the difficulty of getting in touch with Google, are troubling, however, as much of the media relies on the AdSense service to exist.

Even if Google doesn’t actively abuse this power, it is disturbing that it has the means by which to strangle any online publication, and worrying that smaller organisations can have problems getting in contact with it to solve any issues. In light of the recent news about Google's search results, the picture painted becomes even more troubling.

Update, 13/01/17:

Another Google spokesperson got in touch to provide the following statement: “We have an existing set of publisher policies that govern where Google ads may be placed in order to protect users from harmful, misleading or inappropriate content.  We enforce these policies vigorously, and taking action may include suspending ads on their site. Publishers can appeal these actions.”

Amelia Tait is a technology and digital culture writer at the New Statesman.