Life after a Twitterstorm

Tracking down a man whose arrogant email went viral, Alan White wonders: where is groupthink taking us?

This is a story about a man I never met. His name is Stuart. Except it’s not Stuart, because I’ve changed his name, of which more later. Five years ago, Stuart gained a degree of online notoriety.

It’s hard to describe exactly why, without giving away his identity. I can tell you that some emails he sent from his work account were leaked, and they went viral. In these emails, Stuart comes across as mighty rude, and mighty arrogant. This was his comeuppance. The email exchange became a page lead in the now-defunct TheLondonPaper and London Lite, and was accompanied by a load of pictures of him.

A long internet search for his name – you have to go back quite a few pages on Google – finds two references to the initial news story, and two mentions of him on forums. “If you want to tell this c*** what you think of him,” writes one poster, “Then drop him an email on XXXXX@XXXXXX.com.”

What I wanted to do was talk to Stuart, and find out what happens to those who’ve found themselves the target of online outrage – in modern parlance, a Twitterstorm. Such phenomena were far rarer when this one happened. Today, they almost seem a weekly event – from Sarah Duncan, the shouty woman in Bath who didn’t like being filmed in her car, to, most recently, Kay Burley behaving insensitively towards April Jones’ relatives. They often make me feel more uncomfortable than the original offence.

It’s not that the outrage, on an individual level, isn’t justified. It’s the way that social networking sublimates an individual’s opinion into a wider groupthink, such that you begin to wonder: at what point does the tail begin to wag the dog? What’s the long-term effect on society? Is original, individual thought one day going to come under threat from the tyranny of a majority who find it uncomfortable?

So I emailed Stuart’s company, and asked where he was. And they told me they didn’t know. So I typed his name into Facebook. Nothing. Then Twitter. Nothing.

***

 

A few weeks ago, I was given Andrew Smith’s follow-up to Moondust to review. It’s called Totally Wired, and it’s about the dot-com bubble and Josh Harris, one of the first dot-com millionaires, active in the late 1990s. Harris was a genius, and it’s Smith’s contention that he apprehended what the Internet would do to us ten years before anyone else.

Here’s a section from that review:

“Harris had moved on from using his projects to correctly predict what the web would look like, to what it would do to us as individuals. Or rather, to us as a collective. He developed an art project called Quiet: We Live in Public, in which 100 volunteers were placed in a huge basement, with webcams following their every move. A little later, he started another experiment, in which he and his girlfriend put webcams all around their flat, and lived under 24-hour surveillance, with thousands of viewers around the world invited to submit their feedback on what they thought of the pair's every action (via chatrooms).”

What was Harris doing with those two experiments? Why did he think it was so important to carry them out, at a time when he was at severe risk of losing all of the millions he’d amassed (and he eventually did)?

It’s because he was attempting to envisage the future. Because now we do live under the watchful gaze of everyone else, and we are likely to receive feedback from strangers – some of it stinging or insensitive. There’s a section in Smith’s book where one of Harris’s friends calls the author an ‘asshole’ during an interview, seemingly for no reason. But, as the friend points out, he’s made Smith blush, and seeing Smith upset has made him blush, too. The Internet takes away that crucial protective shield. It creates a culture of deindividuation. It stops us questioning ourselves.

Harris saw that. The people watching him and his girlfriend seemed unable to empathize with them; they made requests of a sexual nature and told them how rough they were looking. The pair began to lose their minds.

The simple action of typing someone’s name into a few search engines will yield untold amounts of information. Job, mutual friends, opinions on various things – the fact that, as Harris had it, “We live in public” is a social shift to which we’ve acclimatized remarkably quickly.

It wasn’t always thus, of course. Second Life – how ridiculous it seems now – was once held up by many as the future of our online existence. Why didn’t it catch on? One theory is that it allowed rather too much wish fulfillment.

The writer Bethlehem Shoals (ironically, a pen name) picks out this as a defining moment: “In 2004, Presidential candidate Mark Warner held a Town Hall meeting there as part of his campaign. Participation was enthusiastic but somebody in the audience with wings simply would not stay put. It was distracting and more importantly, fairly useless in its added functionality, unless a social premium was really going to be placed on the wish-fulfillment involved in having wings on a regular basis.”

No – we don’t want to live in a fantasy world. What we want to do is sell our own narrative, augment it, share it. Again and again – Spotify, Instagram, Pinterest – we are encouraged to share our tastes. What are we doing? What are we listening to? Eating? We’re under a self-imposed, dull form of surveillance – useful for business, and usually boring for our peers.

Shoals looks at another aspect of this phenomenon; calls it “personality seepage”. The chat between you and your mate about the beers you had on Friday is but a click away from the window in which you’re emailing your mother, which is but a click away from the job application you’re currently writing. You never used to be all those people at once, he points out – but now you can. Public and private, professional and casual – all of these disparate worlds are seeping into one existence.

That was what happened to Stuart: the private became public, swiftly and violently. I want to find out what it was like. But I can’t. I ask people at his old work to ask around the office. I hear three different stories. One’s that he’s emigrated to Australia. Another is that he’s become a sheep farmer in the Lake District. I begin to get a little obsessed about his fate. I go back through hundreds of pages of Google results. I look for him on the electoral roll. I start trawling through Facebook groups for mention of his name. Does Stuart even exist? It’s like the tree that fell in the forest, with no one to hear it.

***

Personality seepage – the clash between the public and the private. It’s something with which the wired society is really struggling. I write this paragraph the day that Matthew Woods, 19, has been sentenced to 12 weeks in prison for making “grossly offensive” remarks about April Jones on his Facebook page.  A stupid boy, making stupid, highly offensive comments – this is hardly a new phenomenon, but now it involves the act of publishing.

Those of us with journalistic training move rather more comfortably online than others – we know, for example, what constitutes libel, or when copyright has been breached. It often seems that the likes of Matthew Woods don’t realize they’re subject to the law when they post stupid things. That isn’t to say their posts should be excused – it’s to say that the law would be a more effective deterrent if people understood how it works.

The Director of Public Prosecutions is currently drawing up draft guidelines for section 127 of the Communications Act 2003, which makes it an offence to send a "message or other matter that is grossly offensive or of an indecent, obscene or menacing character" using a "public electronic communications network". 

It’s going to be a complicated settlement. Joshua Rozenberg has drawn attention to some of the issues: we accord our stand-up comedians plenty of leeway with regard to sick humour – what if a vile message is meant as a joke? What if a message inadvertently ends up posted on a public forum (this has actually happened to me once: a comment I thought was going on a friend’s Facebook wall was posted in a website’s comments instead, because my iPhone didn’t show the provenance of the feed)? And should you face censure for retweeting something?

Prior to this case, of course, we had a kind of counter-example: Paul Chambers and his “threat” to blow up Robin Hood airport. The prosecution case included the line that he’d made a public statement. He had – but it would equally have been a public statement had he cracked the same joke to a friend in the departure lounge.

And the case takes us back to my earlier point. We have placed ourselves under an extreme form of surveillance.  A bar-room joke becomes a statement of intent, viewable by anyone, anywhere. Our every error is now catalogued, easily searchable, impossible to forget.

When writing about the Twitter joke trial, Nick Cohen cited how “Adrian Smith, a Christian working for the Trafford Housing Trust, finds that opposition to gay marriage on his Facebook page allows his employers to lop £14,000 from his salary.” That’s what Adrian Smith finds when he looks for himself: he’s the man who opposes gay marriage. What about Stuart? Well, for a time, I imagine he was the man who sent the horrible work emails. But other results, pertaining to people who share his name, have gradually eroded his online existence, like waves against a cliff. Now, Stuart’s just gone.

 

***

During the writing of this piece, I’m on holiday, cycling along the Greek coast. We stop by a quiet little beach, overlooking the Aegean, and I pull out my phone to check the time. I notice the three little lines that show my phone has picked up a wi-fi signal. No password needed. Instinctively, a well-rehearsed cue-action-reward structure lights up in the neural pathways of my brain. I check my emails.

A little bit later, I read Adam Gopnik on how the Internet has turned us inside-out – how it’s given full voice to our paranoia, our sexual obsessions, our fetishes:

“Everything once inside is outside, a click away; much that used to be outside is inside, experienced in solitude. And so the peacefulness, the serenity that we feel away from the Internet, and which all the Better-Nevers rightly testify to, has less to do with being no longer harried by others than with being less oppressed by the force of your own inner life. Shut off your computer, and your self stops raging quite as much or quite as loud.”

Consider: if you could meet the Prime Minister face-to-face, what would you say to him?

***

 

Cohen singles out the Jan Moir episode as the moment we crossed the Rubicon. Let’s remember what happened on 16th October 2009. Of the death of the Boyzone singer Stephen Gately, Moir wrote that "whatever the cause of death is, it is not by any yardstick a natural one".

To quote Cohen: “So many people complained to the Press Complaints Commission that its website crashed. Advertisers pulled their business from the Mail. Users put Moir’s home address on Twitter so that furious readers could – who knows? – go round to her house and beat her up. My colleague Charlie Brooker of the Guardian, who had written a justifiably scornful article about Moir, had second thoughts when he saw her address flash up on his screen. ‘I felt like part of a mob,’ he said.”

Such mobs are everyone’s fault, and no-one’s. Maybe you think Moir deserved it. Or maybe you agree with Cohen:  “I no more read her work than I vote for Diane Abbott, watch Jeremy Clarkson or support any of the other targets of web-generated outrage. All I am defending are the notions that freedom of speech includes the freedom to be foolish and foul and that criticism should not turn into intimidation.”

During the Olympics, I watched a similar phenomenon, after a boy tweeted some disgusting remarks at Tom Daley, which Daley then shared with his followers. They in turn shared the tweet, and soon thousands of people were responding to the boy’s initial tweets with furious vitriol of their own. The next day, I wrote this:

“This morning a few details have come to light with regard to Tom Daley’s Twitter troll. You probably won’t have noticed them, because they only comprised a few lines in the Sun and Daily Mail. But we know Reece Messer is 17 years old, and has 10 brothers (or half-brothers). We also know his father thinks the police should have been called in, but added 'He doesn’t know what he’s saying. He has ADHD but doesn’t take his medicine.' We know his mother left him at an early age. And we also know he was very likely brought up in care.”

Wrongly, I said the correct response should have been to “walk on by”. What I should have said was that people should take action through the appropriate channels – by reporting him to Twitter. What I did know was that thousands of people telling an obviously disturbed child that he was a “cunt” was no form of justice I recognized. But then, that’s what happens with personality disorders: the rage the sufferer is feeling is swiftly projected onto the people who apprehend them.

***

And this returns us to Josh Harris, the idea of the tyranny of the majority, and the notion that when we’re online we question ourselves less. Every Twitterstorm that I’ve seen has involved the opposite: the furious rebuttal of views which those involved have long found reprehensible. And there’s less room for informed questioning of another’s views – still less one’s own – when there are RTs to be had and followers to pick up.

I’m reminded of F Scott Fitzgerald’s response to a piece of hate mail: “Who in hell ever respected Shelley, Whitman, Poe, O. Henry, Verlaine, Swinburne, Villon, Shakespeare ect when they were alive… Just occasionally a man like Shaw who was called an immoralist 50 times worse than me back in the 90ties, lives on long enough so that the world grows up to him. What he believed in 1890 was heresy then —by now its (sic) almost respectable.”

Harris is now a passionate believer in the idea of the technological singularity: the hypothetical future emergence of greater-than-human superintelligence through technological means. The most pessimistic reading of this would say: most of us have grown up in an era when GCSE education has stressed the evil deeds of Stalin or Hitler; most of us know of Big Brother and the idea of doublethink. But what if the new dictatorships aren’t about individuals at all?

We’re not there yet. Maybe we never will be. We are, however, at a phase where political discourse is becoming less nuanced, more detached from facts, and, like Jan Moir or Daley’s troll, more about affirming one’s standing within a tribe. Look, for instance, at how quickly a picture of Maria Miller MP was disseminated around Twitter. It was wrong; it was badly researched; but it was pithily phrased, and that was all that was needed.

Among the people I follow on Twitter, I see prostitutes, right-wing vicars, left-wing campaigners, teachers, policemen and many more, all reveling in a new-found freedom to express themselves. But just as the idea that the Internet would liberate us all is challenged by the ease with which dictators and PR firms have manipulated information there, so the concept of free speech is challenged by an ever-increasing tribalism.

Adam Curtis’s series All Watched Over By Machines of Loving Grace featured a 1991 experiment conducted by a computer engineer called Loren Carpenter. Hundreds of people in a shed were each given a paddle, and a game of Pong was projected onto a big screen. Each half of the audience controlled one of the bats: when they turned their paddles, they contributed to the bat moving up or down. Despite not having been told what they were doing, within minutes they were successfully playing the game, whooping with glee as they did so.

Curtis would later tell the Guardian: “Carpenter saw it as a world of freedom with order. But I suddenly saw it as the opposite – like old film of workers toiling in a factory. They weren't free – they looked like disempowered slaves locked to a giant machine screen. It was a video game, which made it fun, but it still made me wonder whether power had really gone away in these self-organising systems, or if it was just a rebranding. So we became happy components in systems – and our job is to make those systems stable."

I can’t stand reading the tweets or blogposts of, say, James Delingpole. I’m hardly the only one. But what I do know is that it’s incumbent on me, should I ever feel the need to communicate with him, to do so with civility, even if he wouldn’t afford me the same courtesy. Because if I don’t, debate becomes a mere slanging match, from which no true progress can emerge. Yet it’s so much easier for me to attack him with vitriol, and share the results with my likeminded followers as a means of validating my thoughts and actions.

Likewise: I support the notion of same-sex marriage. But if I didn’t? How readily would I share those thoughts with my few hundred followers? Would I be happy to undermine the narrative of me which I’ve sold them? For those with thousands of followers, multiply the pressure accordingly. The danger is that – in the virtual world, at least – outlying opinions gradually become the preserve of outlying individuals.

I’m conscious that what I’m attempting to articulate is one side of a double-edged sword. Of course, there’s an alternative view – the world of riot clean-ups, of the Internet as the great step forward in human development, of it being the new printing press. As others have pointed out, those who sell this narrative often ignore the fact that the printing press was quickly used by authoritarian regimes to disseminate their ideas; that this invention may ultimately have helped propagate enlightenment and tolerance, but those ideas were hard-won from history as much as from any books.

Likewise, some would argue that the group cognitive mind can only ever be a good thing. Look at Wikipedia – you either get all the information on a subject, or, where there’s disagreement, both sides of the story. But there’s a problem with this reading – the instances where one tribe is wrong, but insists otherwise. Look at the discussion pages for evolution, for example. Then there’s the notion that similar fears about fragmentation and impersonality have been expressed about all sorts of other cultural phenomena – from the growth of the city to the development of the television. In response, I have to quote Gopnik: “If it was ever thus, how did it ever get to be thus in the first place? The digital world is new, and the real gains and losses of the Internet era are to be found not in altered neurons or empathy tests but in the small changes in mood, life, manners, feelings it creates—in the texture of the age.”

***

 

Eventually, someone at Stuart’s old company put me in touch with a friend of his. I asked him to send Stuart an email, explaining who I was and what I wanted to write. When I hadn’t had a reply after a couple of weeks, I called him.

“Look Alan, Stuart says he appreciates what you’re trying to do here, but he’s just not interested in talking to you.”

“Oh. Well, ok. Could you maybe – tell me something about him? I couldn’t find a thing online.”

“I don’t really want to. He was really upset when it all blew up and I don’t think he wants to revisit it all.”

“Was all the stuff in the emails true?”

“Kind of. Look, it was a high pressure job, and he acted like a twat. But we all did. It wasn’t a nice place, full stop. And then suddenly he was just…well, you know. He got death threats and things.”

“Really?”

“Yeah, it was only a couple of times, but it shook him up. I think it made him think about his life. I mean, he totally changed career.”

“What does he do?”

“I don’t want to – it’s not an office job, I’ll tell you that.”

“Does he use a computer?”

“Not really. He has an email address, but only a few of us have it. He responds to texts.”

“Is he happy now, do you think?”

“Yes. I think that’s why he doesn’t want to talk to you.”

And that was the closest I got to Stuart. So I let him be. Like he never existed at all.


Alan White's work has appeared in the Observer, Times, Private Eye, The National and the TLS. As John Heale, he is the author of One Blood: Inside Britain's Gang Culture.


“I was frozen to the spot”: the psychological dangers of autoplay video

Videos that play automatically are now ubiquitous across social media, but the format leaves many viewers vulnerable to harm and distress.

Have you ever seen a dog being skinned alive? Witnessed a child, whimpering for his mother, getting beheaded? Have you watched a man, pinned down by two police officers, being shot multiple times in the chest and back?

A few years ago, if you answered “yes” to these questions, you might have been considered deranged. Possibly, you would have been on a list somewhere, being monitored for seeking out disturbing and illicit videos online. Now, you’re more than likely just a victim of social media’s ubiquitous autoplay function.

No one likes autoplay. Back in the Nineties, homepages often came with their own jaunty background tune that would automatically play, but it didn’t take long for this annoying and invasive practice to die out. Nowadays, when you click on a website plastered with noisy adverts and clips, you immediately click off it. But although users frequently bemoan them, autoplay videos remain a huge business model for social media sites such as Twitter, Facebook, and Tumblr.

That’s fine, of course, when the autoplaying video in question is a bird’s-eye view tutorial on how to make nacho-chicken-pizza-fries (though even then, the videos might be gobbling up your data allowance without your consent). The problem arises when disturbing content is posted by users, and even media outlets, without any warnings or disclaimers.

“There are many incidents where the autoplay feature has affected me negatively,” says Sarah Turi, a 19-year-old college student from Boston, USA. Turi suffers from anxiety, and says that anything related to horror or gore can keep her awake for multiple nights on end. She has previously experienced anxiety attacks after viewing autoplaying horror movie trailers.

“Recently though, many of the videos that begin playing have to do with police brutality or terrorism or riots,” she says. “There was one incident where someone had shared a video of an execution. The video started playing, and before I could scroll away, I watched a man get beheaded by a terrorist organisation. It left me pretty shaken to say the least. I wasn't crying, but I was frozen to the spot. Even just thinking about it now leaves me feeling somewhat ill.”

Dr Dawn Branley, a health and social psychologist specialising in the risks and benefits of internet and technology use, tells me that autoplay videos on social media raise a variety of ethical concerns.

“Social media is more personal in nature compared to news channels and it is also often idly browsed with little conscious effort or concentration, and, as such, users may not be mentally prepared for the sudden appearance of a distressing video,” she says. “Suddenly witnessing a beheading, rape or graphic animal cruelty whilst scrolling through photos of your friends and family, holiday snaps, baby videos and wedding announcements may provide a real shock to the viewer.”

Branley says that, in her line of work, she has spoken to users who have experienced distress at photos of abuse and violence on social media, and speculates that autoplay video could only exacerbate this problem. She also notes that they can trigger vulnerable users, for example, people who suffer from eating disorders or PTSD.

Even those without pre-existing conditions can be negatively affected, however. As anyone who has seen disturbing footage knows, it can pop into your head intrusively at any time and refuse to budge, remaining plastered to the edges of your skull. Even trolls are aware of this: some tweet distressing footage at people, knowing that it will autoplay.

In January 2015, Facebook responded to these issues by adding warnings to videos users flagged as graphic, meaning the footage wouldn’t autoplay and was preceded by a warning message. Viewers under 18 would be shielded from seeing violent content on their feeds. Yet just over seven months later, in August, autoplay meant thousands inadvertently watched footage of the shooting of TV journalists Alison Parker and Adam Ward.

Remember when I said no one likes autoplay? That’s not strictly true. You have three seconds to scroll away from an autoplaying video before Facebook counts it as a view. In a world where Facebook, and the users of it, are desperate to tally up as many views as possible, autoplay is considered a smart business model.

“Autoplay video originated as a way to capture viewers’ attention and prevent them from ignoring or scrolling past website content,” says Branley. “The autoplaying nature of a video is likely to capture the viewers’ attention and may potentially be harder to resist watching – compared to a static image and text.”

For those profiting, it seems not to matter that some people who can’t look away are viewers like Turi, frozen on the spot by distress.

Because autoplay is so profitable, many news outlets continue to upload sensitive footage that might be better suited to their own websites – a consensual click away. They might add their own pre-roll warnings, but Branley notes that these are easy to miss if the video is autoplaying. If you were online yesterday, you might have seen this in action, as footage of a boy – or rather the boy – in an ambulance, distressed and bloodied, autoplayed repeatedly across social media.

News outlets have been called out for this before, and have admitted their mistakes. In August 2015, New York Times social media editor Cynthia Collins told The Media Briefing that she wished she’d added a warning to a video of men being shot and killed at sea. After a backlash from the paper’s audience, she said:

“If we could do it all again . . . there would have been a discussion about whether or not we should upload the video at all. But if we decided to upload the video I absolutely would have added a warning.”

The video ended up on the website, and viewers had to click through a handful of warnings before watching it. News footage has always had the potential to alarm and distress, but at least in the past viewers had a choice about whether they watched it. Although many news outlets have guidelines on graphic content (such as the famous breakfast test), these haven’t always been updated for social media.

“It’s important that users are made aware of potential solutions to this problem,” says Branley, noting that Facebook and Twitter include options in their settings to turn off autoplay, and that your browser or phone may also allow you to disable all autoplay. “However, that does not detract from the moral obligation that internet platforms should consider when introducing autoplay.

“I would suggest that an ‘opt-in’ approach (where users are required to switch on the autoplay setting if they wish to enable this function) would be much more appropriate than the current ‘opt-out’ approach, which requires users to find the settings to switch off autoplay if they do not wish to use it.”  

This seems like the simplest and fairest answer. It’s hard to argue that distressing videos shouldn’t be posted on Facebook – last month, the footage of Philando Castile’s shooting dramatically shed light on police brutality – but it seems only natural that viewers should have a choice about what they watch.

“It is possible that autoplay videos could be used to raise awareness of sensitive topics and/or to grab users' attention for positive reasons like charity campaigns,” says Branley. “However, it is a fine line between raising awareness and distressing the viewer and what one viewer finds acceptable, another may find highly distressing. Therefore, care and consideration is required.”

Right now, both care and consideration are lacking. In its current iteration, autoplay is like Anthony Burgess’ Ludovico technique – pinning our eyes open and forcing us to watch violence and horror without our consent. There are things I know I never want to watch – the curb stomp in American History X, an Armenian weightlifter dislocating their elbow during the Olympics – that could be sprung upon me at any time. Why? Because someone, somewhere, profits.

“I don't think autoplay is necessary in Facebook,” says Turi. “I think that people should decide for themselves whether or not they want to watch something. And yes, I think that it should be banned.”

Amelia Tait is a technology and digital culture writer at the New Statesman.