Twitter's immediacy is both a boon and a curse for writers. Image: Kooroshication / Flickr / CC BY 2.0

On Twitter and opinion journalism: what is an opinion worth if everyone has one?

Why should my opinions, or those of other comment journalists, be worth more than those of anyone else, especially now Twitter allows anyone to find an audience? 

These days, I spend more time talking about Twitter than on it. Now, I’m writing about it: something I’d always considered beneath me, having long despaired of journalism sprung from online arguments and dressed up with the phrase “took to Twitter” to make it sound less trivial. But for anyone who covers social or political issues online, the ways in which this particular social network has changed how writers and readers interact make it impossible to ignore.

I reluctantly joined Twitter in July 2010 after I’d written a handful of blogs for the Guardian about my gender reassignment, feeling that I was shooting myself in the foot by not being on there – virtually every journalist seemed to be present, sharing pieces and opinions. I’d been hesitant about having my posts open to comments, having seen other bloggers get torn to pieces, but overall I found the experience positive, so I decided to enter this wider conversation.

One reason for my reticence was the stereotype that Twitter was just people endlessly hurling inanities into the ether, and if I looked for this then I could find it. I preferred to listen rather than speak, and liked Twitter best when it pointed away from itself and towards blog posts, articles, videos and other media. I found fascinating publications and publishers, critics and readers that I might never have found otherwise, as well as cinemas, venues and galleries offering the avant-garde culture which had previously required far more effort to experience ‘live’. I made plenty of new friends, and enjoyed connections with people I’d written about or admired, or ones that were simply strange and amusing. (A personal favourite came when I discussed favourite opening lines of songs, mentioning "The Bastard Son of Dean Friedman" by Half Man Half Biscuit. Soon, Friedman himself told us that it was a favourite of his, too.)

One reason I decided to pursue writing, as an undergraduate in 2002, was that although I wanted to express ideas about my world and perhaps try to reshape it, I’m not inclined towards direct confrontation, and I’m detached to a fault – so this sort of constant engagement with readers wasn’t what I’d anticipated, or desired. Once I began publishing, two years later, I found that ‘writing’ often meant finishing anything from a pitch to a play, sending it out and waiting ages for a (usually negative) reply, so I was glad to know from the rapid responses of comments and Twitter that somebody was interested – I’d craved that as a critic working exclusively in print, on films that nobody watched, for magazines that nobody read.

I’d never had such dialogue: I’d missed the ‘golden age’ of radical blogging, only discovering its leading lights after they started writing for mainstream media, and once Zero Books turned their old posts into publications. I’d given up other arts – music and acting – to concentrate on writing, but I missed performance and its immediate relationship with an audience: comments and Twitter brought that into the form, which I liked.

Part of this was connected to my falling into a new style of writing: opinion journalism. This had mushroomed due to widespread access to the internet, as newspapers tried to deal with the collapse of traditional funding models by commissioning content that was quicker and cheaper to produce, and to bring expanding readerships into a conversation which had opened onto Twitter as the network became more popular. I’d always wanted to write short stories, plays and poetry, but one reason that I’d struggled with these forms was that it was too easy to imagine who my audience would be, and too hard to escape some sense of preaching to (tiny numbers of) the converted – a malaise that blogger-novelist Lars Iyer identified beautifully in his 2011 manifesto, "Nude in your hot tub, facing the abyss".

Iyer also posited that technology decreasing the distance between writers and readers might not be good for literary authors – although Sheila Heti’s "What Would Twitter Do" series provides some optimistic counter-examples – but journalists have no need to be so detached. Part of their art lies in finding stories, engaging with people and understanding their perspectives on the wider issues. I wasn’t sure that this was for me, but opinion journalism now seemed the most radical form, primarily because I couldn’t second-guess who such writing might reach.

I remained a critic rather than a journalist, dealing not so much with people as ideas, but as I was working in mainstream media I still needed to ‘network’ in order to give those ideas some reach beyond what I could achieve alone. Thanks to Twitter, I got years of this done in about six months, but it didn’t supplant the need to be physically present, tending rather to lead to face-to-face encounters, or to casually sustain the relationships those encounters initiated. I soon found it to be a great servant but a terrible master, becoming obsessed with reading everything in my timeline – down the pub, at football matches or on the street, I was always somewhere else. It reached the point in 2011 when my oven caught fire and my immediate reaction was to tweet about it; several laconic responses suggested I revise my priorities. (I turned the oven off, left the grill pan to burn itself out and hoped for the best. Eventually, people advised a damp cloth, but my strategy had somehow worked before that.)

*

I’d wanted to go into comment journalism because I hoped to create space for marginalised subjects, and because I didn’t like the emphatic certainty that so many established columnists displayed in their opinions. Wasn’t there room for ambivalence, for doubt, for admitting that you didn’t know everything? Not with the kind of word counts on offer – although the internet theoretically allowed for any length, the assumption was that readers, offered so much choice, wouldn’t stay beyond 900 words or so. (We’re just over that and you’re still here – right?) With outlets vying not just to be first with news but also with a take on it, there was little time for historical context – vital when covering issues that had been neglected, or offering opinions that went against ‘conventional doctrines’, as Noam Chomsky discussed so acutely in Manufacturing Consent.

I struggled to position myself in this, but found it even more difficult on Twitter, the form of which took this brevity, information overload and moral certitude to a logical conclusion. Arguably, it has led mainstream media outlets to become even more reductive, running shorter pieces and more ‘listicles’, in an attempt to hold readers’ attention as the boundaries between journalism and blogging, comment sections and Twitter collapsed ever further. As Ian Crouch asked in "Bartleby and Social Media" for the New Yorker, where did ‘the introverts, the misanthropes and outcasts’, let alone those writers who don’t particularly want to be visible, fit into this?

*

As I befriended other freelance journalists, who had mostly started off as bloggers, we joked about the prospect of publications asking us to “engage more” on Twitter, and how we resented doing such extra work in promoting our ‘content’ for free. In an industry that had always been precarious, we wondered where the labour in plugging individual pieces to boost their hit rates ended, and where that of building a ‘personal brand’ to increase our chances of getting further commissions began. I certainly didn’t have the energy for this ‘brand-building’: it felt grubby, looked transparent and audiences would soon tire of it. Visibility might mean abuse, which I could handle – most of my favourite comedians were those who deliberately wound up their crowds, after all, and I’d shrugged off so much verbal hostility on the street on beginning my gender reassignment, usually with the threat of physical violence lurking just beneath, that someone called @pissflaps97 with an egg for a face insulting me from behind a keyboard posed relatively little concern.

I struggled more with the largely self-inflicted demand to excel at expressing myself in 140 characters. The unwritten rule that using Twitter only for self-promotion was bad form, coupled with the pressure to maintain a presence even though I often had nothing to say in that format, meant that I became obsessed with thinking of pithy remarks – annoyed when they barely got any retweets, and frustrated when a throwaway comment about (say) football or television, generally made while I ran a bath or walked to the Tube, attracted far more interest than my tweets or blog posts on the art, literature or film I really cared about. First I got fixated on retweets and favourites of individual tweets, then on how many followers I had. Who was more popular than me? Why?

I tried to resist being led by numbers, thinking about Stewart Lee’s critique of the Conservative idea that unprofitable artists should tailor their output to the demands of the market, and how that militated against my original aims. I got tired of throwing links to unpopular culture at an invisible audience, but I didn’t know what else to do, feeling even more exhausted by the rapidity with which Twitter and comment journalism raced in tandem through trending topics, and guilty about keeping silent on political issues but not seeing what I’d add by expressing condolences at a tragedy or outrage at an atrocity.

I spoke more to my friend Joe Stretch, a novelist, than anyone else about Twitter. He was far less keen than me: “I’m a dreamer!” he said, “I’m not interested in building an audience one by one by being witty!” It occurred to me that the way to do that in my field was by being angry, and having endless fights – ideally with others on the left, as right-wingers presented a soft target. This might be with other journalists – there were plenty of feminists whose statements on trans issues I could have “called out”, who often hadn’t grown up with these communication tools and had been caught out by sex workers and trans people having more of a say over their mainstream media representation. Coming from the trans community, often stereotyped as disproportionately furious in an attempt to delegitimise our political concerns, and trying to change the terms of debate rather than accept unfavourable ones, this didn’t appeal.

It might otherwise be with readers who attacked me, but I’d witnessed numerous Twitter rows where somebody’s response had gone down badly and never been forgiven, with the bitter tones of these arguments tending to lead journalists to entrench their positions rather than publicly reconsider them. This wasn’t always the case, but the success rate wasn’t high, discouraging me from becoming involved, pushing me away from outwardly political issues and back towards esoteric arts coverage. In any case, I did all my writing around a day job, and if I spent all day in heated online debates then I’d get fired – as I’d learned from my bad-tempered arguments on forums, which often got me into trouble during a dismal office assignment in mid-2000s Brighton. By the time I started in mainstream media, I knew that once your statements went online, they would almost certainly never disappear.

However frustrating its performative aspects, not least the tendency for people to use a dot in front of someone’s Twitter handle to point their entire following towards an argument, ‘call-out culture’ is by no means entirely bad: I’ve tried to acknowledge feedback when I’ve used a word or phrase without considering its connotations, feeling this more constructive than getting riled. It’s tempting to see Twitter storms as the downside of your work being received in a way that’s impossible to anticipate – the things I fear will annoy people almost never do, and what feel like my most innocuous statements often prove the most antagonising – but if you could accurately predict every response to a piece, there would be no need to write it. Anyway, such rows have become so frequent as to feel virtually meaningless – as one journalist friend said, try to explain one to someone who’s never used Twitter and see how long it takes before you feel too ridiculous to continue – and the outrage cycle so short that writers don’t have to lay low for long before the rage moves elsewhere.

However, all of this generates a climate unfavourable to consideration, historicism or exploration of unorthodox subjects, especially when combined with the advertising models that news outlets use to fund their websites, with revenue often tied to hits. The Daily Mail realised this and commissioned plenty of articles that would bring angry leftists to Mail Online – that the visitors despised them made no difference whatsoever – but they’re arguably not the only culprits. The upshot might be that those who don’t want to offend people gratuitously, and who try to work with sensitivity, are driven out for making human errors, while those who remain are the ones who seem to delight in displeasing people (although I’ll note here that Brendan O’Neill and Julie Burchill aren’t visible on Twitter) – which won’t make for healthy discourse.

Ultimately, Twitter poses all sorts of issues around who speaks – ones that artist and writer Huw Lemmey frequently airs with great intelligence. Feeling ever less inclined towards raising my voice, I often ask myself: why should my opinions, or those of other comment journalists, be worth more than those of anyone else, especially now Twitter allows anyone to find an audience? When investigative journalism was better funded, with commissions given to skilled interviewers and researchers who were rewarded with columns when they were no longer mobile enough to chase stories, the divide between writer and reader made sense, but now it’s far more difficult to say where that divide lies, if it still exists and if it should, although plenty of bad feeling seems to exist between some who write for traditional ‘mainstream media’ and others who don’t.

For all that, people predicting the decline of Twitter might be wrong, and perhaps Twitter might prove to other social networks what Google was to the plethora of search engines vying for users in the late 1990s. The simplicity of Twitter works in its favour (no matter how much timelines are clogged with promoted tweets, videos and images), as does its ultra-democratic granting of access to public discourse to anyone with the stomach for it.

Certainly, the London riots of 2011 proved instructive on changing perceptions of the social uses of Twitter, and on its relationship with opinion journalism, which was still able to speak truth to power as the fourth estate was meant to do. Perhaps aware that Twitter was being credited with important roles in the Arab spring and in exposing the complicity between the Murdoch press and Westminster politicians behind the phone-hacking scandal, incessant tweeter Louise Mensch suggested that it be shut down during periods of civil disorder.

Meanwhile, David Cameron did his best to discourage people from suggesting reasons for the riots, trying to cement the idea that they had occurred in this specific time and place due to no more than "mindless selfishness", and columnist Jody McIntyre was dismissed from several publications for clumsily expressing sympathy with the rioters. In the event, many of the comment pieces I read – and there were plenty – tended to confirm the already-familiar politics of the author, and I never saw the one I wanted to read, which admitted that the writer had little idea yet why they had happened, and that we might be better served by investigative work that engaged with some of the people involved.

But amidst the maddening rush of opinions, and anxieties over how much I should self-censor, the most striking thing I saw was a series of tweets by Surreal Football – one of the few in a lively football blogging circuit expressly opposed to entering mainstream sports pages, a position that clearly liberated them. Now deleted, they read something like: "Tory policy through the ages. 1981: No such thing as society. 2011: Why do these people have no qualms about smashing our stuff?" I paused, thinking about the alienation and marginalisation I felt, the pointlessness and powerlessness, and realised that this was generated by something far bigger than Twitter. I decided against writing anything on the riots or even retweeting what I’d just seen, closed my laptop and went back to the book I was reading.

This piece forms part of our Social Media Week series.

Juliet Jacques is a freelance journalist and writer who covers gender, sexuality, literature, film, art and football. Her writing can be found on her blog, and she can be contacted on Twitter @julietjacques.

Image: Mahmoud Raslan

“I was frozen to the spot”: the psychological dangers of autoplay video

Videos that play automatically are now ubiquitous across social media, but the format leaves many viewers vulnerable to harm and distress.

Have you ever seen a dog being skinned alive? Witnessed a child, whimpering for his mother, getting beheaded? Have you watched a man, pinned down by two police officers, being shot multiple times in the chest and back?

A few years ago, if you answered “yes” to these questions, you might have been considered deranged. Possibly, you would have been on a list somewhere, being monitored for seeking out disturbing and illicit videos online. Now, you’re more than likely just a victim of social media’s ubiquitous autoplay function.

No one likes autoplay. Back in the Nineties, homepages often came with their own jaunty background tune that would automatically play, but it didn’t take long for this annoying and invasive practice to die out. Nowadays, when you click on a website plastered with noisy adverts and clips, you immediately click off it. But although users frequently bemoan them, autoplay videos remain central to the business models of social media sites such as Twitter, Facebook, and Tumblr.

That’s fine, of course, when the autoplaying video in question is a bird’s-eye view tutorial on how to make nacho-chicken-pizza-fries (though even then, the videos might be gobbling up your data allowance without your consent). The problem arises when disturbing content is posted by users, and even media outlets, without any warnings or disclaimers.

“There are many incidents where the autoplay feature has affected me negatively,” says Sarah Turi, a 19-year-old college student from Boston, USA. Turi suffers from anxiety, and says that anything related to horror or gore can keep her awake for multiple nights on end. She has previously experienced anxiety attacks after viewing autoplaying horror movie trailers.

“Recently though, many of the videos that begin playing have to do with police brutality or terrorism or riots,” she says. “There was one incident where someone had shared a video of an execution. The video started playing, and before I could scroll away, I watched a man get beheaded by a terrorist organisation. It left me pretty shaken to say the least. I wasn't crying, but I was frozen to the spot. Even just thinking about it now leaves me feeling somewhat ill.”

Dr Dawn Branley, a health and social psychologist specialising in the risks and benefits of internet and technology use, tells me that autoplay videos on social media raise a variety of ethical concerns.

“Social media is more personal in nature compared to news channels and it is also often idly browsed with little conscious effort or concentration, and, as such, users may not be mentally prepared for the sudden appearance of a distressing video,” she says. “Suddenly witnessing a beheading, rape or graphic animal cruelty whilst scrolling through photos of your friends and family, holiday snaps, baby videos and wedding announcements may provide a real shock to the viewer.”

Branley says that, in her line of work, she has spoken to users who have experienced distress at photos of abuse and violence on social media, and speculates that autoplay video could only exacerbate this problem. She also notes that they can trigger vulnerable users, for example, people who suffer from eating disorders or PTSD.

Even those without pre-existing conditions can be negatively affected, however: as anyone who has seen disturbing footage knows, it can pop into your head intrusively at any time and refuse to budge, remaining plastered to the edges of your skull. Trolls are aware of this too, and some tweet distressing footage at people knowing that it will autoplay.

In January 2015, Facebook responded to these issues by adding warnings to videos users flagged as graphic, meaning the footage wouldn’t autoplay and was preceded by a warning message. Viewers under 18 would be shielded from seeing violent content on their feeds. Yet just over seven months later, in August, autoplay meant thousands inadvertently watched footage of the shooting of TV journalists Alison Parker and Adam Ward.

Remember when I said no one likes autoplay? That’s not strictly true. You have three seconds to scroll away from an autoplaying video before Facebook counts it as a view. In a world where Facebook, and the users of it, are desperate to tally up as many views as possible, autoplay is considered a smart business model.

“Autoplay video originated as a way to capture viewers’ attention and prevent them from ignoring or scrolling past website content,” says Branley. “The autoplaying nature of a video is likely to capture the viewers’ attention and may potentially be harder to resist watching – compared to a static image and text.”

For those profiting, it seems not to matter that some people who can’t look away are viewers like Turi, frozen on the spot by distress.

Because of how profitable autoplay is, then, many news outlets continue to upload sensitive footage that might be better suited to their own websites – a consensual click away. They might add their own pre-roll warnings, but Branley notes that these are easy to miss if the video is autoplaying. If you were online yesterday, you might have seen this in action, as footage of a boy – or rather the boy – in an ambulance, distressed and bloodied, autoplayed repeatedly across social media.

News outlets have been called out for this before, and have admitted their mistakes. In August 2015, New York Times social media editor Cynthia Collins told The Media Briefing that she wished she’d added a warning to a video of men being shot and killed at sea. After backlash from their audience, she said:

“If we could do it all again . . . there would have been a discussion about whether or not we should upload the video at all. But if we decided to upload the video I absolutely would have added a warning.”

The video ended up on the website, and viewers had to click through a handful of warnings before watching it. News footage has always had the potential to alarm and distress, but at least in the past viewers had a choice about whether they watched it. Although many news outlets have guidelines on graphic content (such as, for example, the famous breakfast test), these haven’t always been updated for social media.

“It’s important that users are made aware of potential solutions to this problem,” says Branley, noting that Facebook and Twitter include options in their settings to turn off autoplay, and your browser or phone may also allow you to disable all autoplay. “However, that does not detract from the moral obligation that internet platforms should consider when introducing autoplay.

“I would suggest that an ‘opt-in’ approach (where users are required to switch on the autoplay setting if they wish to enable this function) would be much more appropriate than the current ‘opt-out’ approach, which requires users to find the settings to switch off autoplay if they do not wish to use it.”  

This seems like the simplest and fairest answer. It’s hard to argue that distressing videos shouldn’t be posted on Facebook – last month, the footage of Philando Castile’s shooting dramatically shed light on police brutality – but it seems only natural that viewers should have a choice about what they watch.

“It is possible that autoplay videos could be used to raise awareness of sensitive topics and/or to grab users' attention for positive reasons like charity campaigns,” says Branley. “However, it is a fine line between raising awareness and distressing the viewer and what one viewer finds acceptable, another may find highly distressing. Therefore, care and consideration is required.”

Right now, both care and consideration are lacking. In its current iteration, autoplay is like Anthony Burgess’ Ludovico technique – pinning our eyes open and forcing us to watch violence and horror without our consent. There are things I know I never want to watch – the curb stomp in American History X, an Armenian weightlifter dislocating their elbow during the Olympics – that could be sprung upon me at any time. Why? Because someone, somewhere, profits.

“I don't think autoplay is necessary in Facebook,” says Turi. “I think that people should decide for themselves whether or not they want to watch something. And yes, I think that it should be banned.”

Amelia Tait is a technology and digital culture writer at the New Statesman.