Nazis often tell me to kill myself – but it was Danny who made me cry. A complete stranger, Danny disagreed with something I wrote on Twitter and wanted to let me know. “Sorry you feel that way,” I replied to his abrasive comment, going on politely to argue my point. “Why are you sorry?” he replied. “I don’t feel bad.”
I have long assumed that people online stop abusing you when they remember there is a real human behind the screen. Danny didn’t, which upset me more than the explicit abuse I receive from neo-Nazis after I write about the alt-right. Because I had treated Danny with respect and he hadn’t responded in kind, I felt dehumanised.
But had Danny actually meant to upset me? Our exchange is emblematic of a wider online trend. More often than not, nasty comments only feel like abuse when you’re receiving them, not when you’re giving them out. It’s easy to throw out a hateful message without thinking, but the recipient will often agonise over its meaning.
In July 2016, this happened to Labour MP Thangam Debbonaire. A student tweeted to tell her to “get in the sea” – a popular online phrase that denotes distaste for something. The MP, unaware of the joke, construed it as a death threat, reporting the comment to the student’s university. And this January, the Green Party’s deputy leader, Amelia Womack, was branded a liar by thousands of men who thought she had made up a tweet in which she attributed a feminist comment to her 11-year-old nephew. The men who tweeted at Womack defended their actions (which included Photoshopping her face on to nude women) as “banter”, but she didn’t see the joke. “Branding trolling as ‘banter’ or ‘just a joke’ attempts to excuse abuse. It is irresponsible and it is time it stopped,” she says now.
The government-funded website Stop Online Abuse says that “it’s not always clear where the boundary falls between expressing a point of view and being abusive”. These kinds of comments are not something you can – or would want to – legislate against. But when a thousand people calling you a liar can feel as damaging as one “Kill yourself, bitch”, we must think critically about the internet’s propensity for cruelty. After all, cyberbullying can have catastrophic consequences.
This January, two 12-year-old American girls were charged with cyberstalking after their classmate took her own life. In 2011, mental health campaigner Carney Bonner described how abusive comments he initially brushed off as “a joke” resulted in him self-harming. But abuse doesn’t only qualify as such if it results in tragedy. A 2014 report from the Pew Research Center, a US think tank, found that 40 per cent of adults had been victims of online abuse – a statistic we should consider alongside the fact that one in four adults in the UK experiences a mental health problem each year.
“Sometimes you need hard policies – no racism, sexism, transphobia, homophobia – but the rest of the time you need the ability to empathise with those on the receiving end of a comment,” says David Kitchen, a Londoner who has moderated more than 350 online forums in the last 20 years. When he first created anti-bullying policies, he found that some users would try to “game” the rules. “A small minority of people are just bullies, and among them a smaller number still are incredibly skilled,” he says. In response, he developed zero-tolerance policies, “which sounds draconian but it isn’t… it’s liberating to have an online space be embracing and feel safe”.
It would arguably be impossible for similar policies to be implemented by social media giants such as Facebook and Twitter, because of their sheer size, but not all solutions to the problems created by technology need to be top-down. Children and adults must be educated about the real-world consequences of online acts.
“It’s a silly little thing that can probably do a lot of harm,” says a media professional who was recently “subtweeted”. To subtweet is to tweet something – usually critical or mocking – about someone without directly naming them, so that only a few people, including the target, understand to whom the comment refers. This can give bullies plausible deniability.
“That modicum of doubt is what makes it all the sadder,” says my interviewee, who wanted to remain anonymous. The subtweet – by someone he respected in his industry – made him feel worried and unsure how to react. “Because what can you do, exactly? Respond directly to the tweet and they could simply feign ignorance or make light of it… subtweet their subtweet and you essentially lower yourself to their level of digital cowardice.”
By now, your sympathy may have run out. Crying about online meanies? Why not grow up? But the same psychological phenomena that motivate us to be mean online also make it difficult to ignore this bullying.
“Unlike in an offline situation, abusers don’t have to physically face their victim’s reaction,” says Dawn Branley, a cyberpsychologist at Northumbria University. That leads to repeat offending. In turn, because the abused can’t see the other person’s facial expression or hear their tone, they often assume the worst. “Sometimes people do not intend to be mean in their comments; emotion, humour and sarcasm can easily be lost,” adds Branley.

So, yes, we should try to develop thicker skins. But we should also give more thought to our online interactions. Harmful behaviours aren’t restricted to basement-dwelling trolls or cyberbullying children. The digital landscape is giving grown adults a taste for cruelty. “A joke can be just a joke, but a moderator should possess the imagination and emotion to empathise with the subject,” says Kitchen. “If it feels too harsh, then it probably is.”
This article appears in the 24 Jan 2018 issue of the New Statesman, How women took power