
“I was frozen to the spot”: the psychological dangers of autoplay video

Videos that play automatically are now ubiquitous across social media, but the format leaves many viewers vulnerable to harm and distress.

Have you ever seen a dog being skinned alive? Witnessed a child, whimpering for his mother, getting beheaded? Have you watched a man, pinned down by two police officers, being shot multiple times in the chest and back?

A few years ago, if you answered “yes” to these questions, you might have been considered deranged. Possibly, you would have been on a list somewhere, being monitored for seeking out disturbing and illicit videos online. Now, you’re more than likely just a victim of social media’s ubiquitous autoplay function.

No one likes autoplay. Back in the Nineties, homepages often came with their own jaunty background tune that would automatically play, but it didn’t take long for this annoying and invasive practice to die out. Nowadays, when you click on a website plastered with noisy adverts and clips, you immediately click off it. But although users frequently bemoan it, autoplay video remains central to the business models of social media sites such as Twitter, Facebook, and Tumblr.

That’s fine, of course, when the autoplaying video in question is a bird’s-eye view tutorial on how to make nacho-chicken-pizza-fries (though even then, the videos might be gobbling up your data allowance without your consent). The problem arises when disturbing content is posted by users, and even media outlets, without any warnings or disclaimers.

“There are many incidents where the autoplay feature has affected me negatively,” says Sarah Turi, a 19-year-old college student from Boston, USA. Turi suffers from anxiety, and says that anything related to horror or gore can keep her awake for nights on end. She has previously experienced anxiety attacks after viewing autoplaying horror movie trailers.

“Recently though, many of the videos that begin playing have to do with police brutality or terrorism or riots,” she says. “There was one incident where someone had shared a video of an execution. The video started playing, and before I could scroll away, I watched a man get beheaded by a terrorist organisation. It left me pretty shaken to say the least. I wasn't crying, but I was frozen to the spot. Even just thinking about it now leaves me feeling somewhat ill.”

Dr Dawn Branley, a health and social psychologist specialising in the risks and benefits of internet and technology use, tells me that autoplay videos on social media raise a variety of ethical concerns.

“Social media is more personal in nature compared to news channels and it is also often idly browsed with little conscious effort or concentration, and, as such, users may not be mentally prepared for the sudden appearance of a distressing video,” she says. “Suddenly witnessing a beheading, rape or graphic animal cruelty whilst scrolling through photos of your friends and family, holiday snaps, baby videos and wedding announcements may provide a real shock to the viewer.”

Branley says that, in her line of work, she has spoken to users who have experienced distress at photos of abuse and violence on social media, and speculates that autoplay video could only exacerbate this problem. She also notes that they can trigger vulnerable users, for example, people who suffer from eating disorders or PTSD.

Even those without pre-existing conditions can be negatively affected, however. Anyone who has seen disturbing footage knows how it can pop into your head intrusively at any time and refuse to budge, remaining plastered to the edges of your skull. Even trolls are aware of this: some tweet distressing footage at people precisely because it will autoplay.

In January 2015, Facebook responded to these issues by adding warnings to videos users flagged as graphic, meaning the footage wouldn’t autoplay and was preceded by a warning message. Viewers under 18 would be shielded from seeing violent content on their feeds. Yet just over seven months later, in August, autoplay meant thousands inadvertently watched footage of the shooting of TV journalists Alison Parker and Adam Ward.

Remember when I said no one likes autoplay? That’s not strictly true. You have three seconds to scroll away from an autoplaying video before Facebook counts it as a view. In a world where Facebook, and the users of it, are desperate to tally up as many views as possible, autoplay is considered a smart business model.

“Autoplay video originated as a way to capture viewers’ attention and prevent them from ignoring or scrolling past website content,” says Branley. “The autoplaying nature of a video is likely to capture the viewers’ attention and may potentially be harder to resist watching – compared to a static image and text.”

For those profiting, it seems not to matter that some people who can’t look away are viewers like Turi, frozen on the spot by distress.

Because of how profitable autoplay is, then, many news outlets continue to upload sensitive footage that might be better suited to their website – a consensual click away. They might add their own pre-roll warnings, but Branley notes that these are easy to miss if the video is autoplaying. If you were online yesterday, you might have seen this in action, as footage of a boy – or rather the boy – in an ambulance, distressed and bloodied, autoplayed repeatedly across social media.

News outlets have been called out for this before, and have admitted their mistakes. In August 2015, New York Times social media editor Cynthia Collins told The Media Briefing that she wished she had added a warning to a video of men being shot and killed at sea. After backlash from the paper’s audience, she said:

“If we could do it all again . . . there would have been a discussion about whether or not we should upload the video at all. But if we decided to upload the video I absolutely would have added a warning.”

The video ended up on the website, and viewers had to click through a handful of warnings before watching it. News footage has always had the potential to alarm and distress, but at least in the past viewers had a choice about whether they watched it. Although many news outlets have guidelines on graphic content (such as, for example, the famous breakfast test), these haven’t always been updated for social media.

“It’s important that users are made aware of potential solutions to this problem,” says Branley, noting that Facebook and Twitter include options in their settings to turn off autoplay, and your browser or phone may also allow you to disable all autoplay. “However, that does not detract from the moral obligation that internet platforms should consider when introducing autoplay.

“I would suggest that an ‘opt-in’ approach (where users are required to switch on the autoplay setting if they wish to enable this function) would be much more appropriate than the current ‘opt-out’ approach, which requires users to find the settings to switch off autoplay if they do not wish to use it.”  

This seems like the simplest and fairest answer. It’s hard to argue that distressing videos shouldn’t be posted on Facebook – last month, the footage of Philando Castile’s shooting dramatically shed light on police brutality – but it seems only natural that viewers should have a choice about what they watch.

“It is possible that autoplay videos could be used to raise awareness of sensitive topics and/or to grab users' attention for positive reasons like charity campaigns,” says Branley. “However, it is a fine line between raising awareness and distressing the viewer, and what one viewer finds acceptable, another may find highly distressing. Therefore, care and consideration is required.”

Right now, both care and consideration are lacking. In its current iteration, autoplay is like Anthony Burgess’ Ludovico technique – pinning our eyes open and forcing us to watch violence and horror without our consent. There are things I know I never want to watch – the curb stomp in American History X, an Armenian weightlifter dislocating their elbow during the Olympics – that could be sprung upon me at any time. Why? Because someone, somewhere, profits.

“I don't think autoplay is necessary in Facebook,” says Turi. “I think that people should decide for themselves whether or not they want to watch something. And yes, I think that it should be banned.”

Amelia Tait is a technology and digital culture writer at the New Statesman.


Lol enforcement: meet the man policing online joke theft

A story of revenge, retweets, and Kale Salad. 

A man walks into a bar and he tells a joke. The man next to him laughs – and then he tells the same joke. The man next to him, in turn, repeats the joke. That bar’s name is Twitter.

If you’ve been on the social network for more than five minutes, you’ll notice that joke theft is rampant on the site. Search, for example, for a popular tweet this week (“did everyone just forget about the part of 2016 when literal clowns would chase people with knives in public and nobody really did anything” – 153,000 retweets) and you’ll see it has been copied 53 times in the last three days.

One instance of plagiarism, however, is unlike the others. Its perpetrator is the meme account @dory and its quick Ctrl+C, Ctrl+V has over 3,500 retweets. This account frequently copies the viral posts of Twitter users and passes them off – word for word – as its own. Many similar accounts do the same, including @CWGirl and @FatJew, and many make money by promoting advertising messages to their large number of followers. Twitter joke theft, then, is profitable.

In 2015, Twitter promised to clamp down on the unchecked plagiarism on its site. “This Tweet from [user] has been withheld in response to a report from the copyright holder,” read a message meant to replace stolen jokes on the site. It’s likely a message you’ve never seen.

Dissatisfied with this solution, one man took it upon himself to fight the thieves. 

“I'm a like happy internet kind of guy,” says Samir Mezrahi, a 34-year-old from New York who runs the Twitter account @KaleSalad. For the last six months, Mezrahi has used the account to source and retweet the original writers of Twitter jokes. Starting with a few hundred followers at the end of December 2016, Mezrahi had jumped to 50,000 followers by January 2017. Over 82,000 people now follow his account.  

“I've always been a big fan of like viral tweets and great tweets,” explains Mezrahi, over the sound of his children watching cartoons in the background. “A lot of people were fed up with the meme accounts so it’s just like a good opportunity to reward creators and people.”


I had expected Mezrahi to be a teen. In actual fact he is a father of three and an ex-Buzzfeed employee, who speaks in a calm monotone, yet is enthusiastic about sharing the best content on Twitter. Though at first sourcing original tweets for Kale Salad was hard work, people now approach Mezrahi for help.

“People still reach out to me looking for vindication and just that kind of, I don’t know, that kind of acknowledgement that they were the originals. Because all too often the meme accounts are much larger and their tweets do better than the stolen tweet.”

But just why does having a tweet stolen suck so much? In the grand scheme of things, does it matter? Did everyone just forget about the part of 2016 when literal clowns would chase people with knives in public and nobody really did anything?

Meryl O’Rourke is a comedian and writer who tweets at @MerylORourke, and now has a copyright symbol (©) after her Twitter name. In the past she has had her jokes stolen and reposted, unattributed, on Facebook and Twitter and hopes this symbol will go some way to protecting her work.

“It’s hard to explain how it felt... as a struggling writer you’re always waiting for anything that looks like recognition as it could lead to your break,” she explains. “When your work gains momentum you feel like your opportunity ran off without you.

“Twitter is a test of a writer’s skill. To spend time choosing exactly the right words to convey your meaning with no nuance or explanation, and ensure popularity and a chuckle, in the space of only 140 characters – that’s hard work.”

However, Mezrahi has found not everyone is bothered by their tweets being stolen. One man I reached out to about a stolen tweet said he didn’t want to speak to me because complaining felt too “first world problems”. Writers like O’Rourke are naturally more annoyed than random teenagers, who Mezrahi says are normally actually pleased about the theft.

“If you go to [a teenager’s] timeline it’s always the same thing. They’re replying to all their friends saying like ‘I’m famous’, they’re retweeting the meme accounts saying like ‘I did it’… they don’t mind as much it seems. It’s kind of like a badge of honour to them.”

Sometimes, people even ask Kale Salad to unretweet their posts. College students with scholarships, in particular, might not actually want to go viral – or some viral tweets may accidentally include personal information. On the whole, however, people are grateful for his work.

Yet the Kale Salad account does have unintended consequences. Mezrahi has now been blocked by the major meme accounts that frequently steal jokes, meaning he had to create alternate accounts to view their content. But just because he can’t see them doesn’t mean they don’t see him – and he has noticed that these accounts now actually come to his profile to steal jokes he has retweeted, in a strange role-reversal.

“There are definitely times when they're picking up things that I just retweeted, like I know they're like looking at me too,” he says. “It feels like vindicated or validated that they come to me.”

Mezrahi now works in social media on a freelance basis, but would be open to making Kale Salad profitable. Earlier this year he set up an account on Patreon – a site that allows fans to pay their favourite creators. Some people didn’t approve of this, tweeting to say he is “just retweeting tweets”. So far, Mezrahi has three patrons who pay him $50 (£39) a month.

“I mean I spend a certain amount of time on this and I think it’s a pretty good service, so I've been thinking about monetisation and thought that might be a route,” he explains. He believes he is providing an important service by “amplifying” creators, and he didn’t want to make money in less transparent ways, such as by posting sponsored advertisements on his account. Yet although many online love Kale Salad, they don’t, as of yet, want to pay him.

“Twitter should buy my account because I’m doing a good thing that people like every day,” he muses.

Many might still be sceptical of the value of a joke vigilante. For those whose jokes aren’t their bread and butter, tweet theft may seem like a very minimal problem. And although it arguably is, it’s still incredibly annoying. Writing in Playboy, Rob Fee explains it best:

“How upsetting is it when you tell a joke quietly in a group of friends, then someone else says it louder and gets a huge laugh? Now imagine your friend following you every day listening for more jokes because people started throwing money at him every time he repeated what you said. Also, that friend quit his job because he made enough to live comfortably by telling your jokes louder than you can. Odds are, you’d quickly decide to find new friends.”

For now, then, Kale Salad will continue his work as the unpaid internet police. “As long as people like the service, I don’t mind doing it. If that's a year or two years or what we'll see how the account goes,” he says.

“Twitter is fun and I like the fun days on the internet and I like to help contribute to that.

“The internet is for fun and not all the sadness that’s often there.”

