Political middlemen and dart-throwing chimps

Martha Gill's "Irrational Animals" column.

Predicting the weather was once quite an interesting profession, needing skill in reading the instruments, intuition in deciphering the skies and years of experience in putting it all together. Now it’s the kind of job Nick Cage’s character would be given in a heavy-handed satire of the American dream, possibly also starring Michael Caine. We don’t need these skilled individuals any more – computers do all that. We just need an algorithm and a mouthpiece.

And so to Nate Silver – one of the biggest winners of the US presidential election. As the race neared its end, becoming “too close to call”, with money and opinions frantically changing hands, the New York Times blogger was calmly and correctly predicting the outcome in every single state. He had what others didn’t – a formula to convert polling information into probabilities – and it turned out to be dead-on. He was not alone in getting it right but he was among the few. Many failed spectacularly.

Here’s Newt Gingrich on Fox News on 25 October: “I believe the minimum result will be 53-47 [per cent] Romney, over 300 electoral votes, and the Republicans will pick up the Senate. I base that . . . on just years and years of experience.” And here’s the GOP strategist Karl Rove in the Wall Street Journal on 31 October: “It comes down to numbers. And in the final days of this presidential race, from polling data to early voting, they favour Mitt Romney.”

These were not small errors. These people were standing in pre-hurricane wind and predicting sunshine. Are pundits more often wrong than not, or was it just this particular election that threw them? And how often do the statistics spewed out by experts hit the mark? One study found a statistic for it.

Algorithm blues

In the 1980s, a psychologist called Philip Tetlock took a group of journalists, foreign policy experts and economists – 284 of them – and spent the next two decades bombarding them with questions: would the dotcom bubble burst? Would George Bush be re-elected? How would apartheid end?

After analysing 82,361 predictions, Tetlock found that his experts performed worse than random chance. In short, they could have been beaten by dart-throwing chimps.

The reason was confidence. Tetlock found that the more often pundits appeared on TV, the more likely they were to be wrong. Their strong opinions were causing them to ignore dissenting facts or explain them away, leaving them trapped, he said, in the cage of their preconceptions.

Now, semi-expert middlemen are being squeezed out as the focus shifts to minute data analysis. Silver is one of the winners of this change but on the losing side is a whole industry of political forecasters. And it’s not just true of politics. Finance has been moving that way for a while. In UBS’s recent swath of job cuts, at least one trader, David Gallers, was replaced with an algorithm.

Difficult times for the old school, but what of the new? Silver expressed his concerns to the Wall Street Journal: “You don’t want to influence the system you are trying to forecast.” Only one problem with the new machines, then – accuracy. They’re so good that they might start controlling the weather.


Martha Gill writes the weekly Irrational Animals column. You can follow her on Twitter here: @Martha_Gill.

This article first appeared in the 19 November 2012 issue of the New Statesman, The plot against the BBC


“I was frozen to the spot”: the psychological dangers of autoplay video

Videos that play automatically are now ubiquitous across social media, but the format leaves many viewers vulnerable to harm and distress.

Have you ever seen a dog being skinned alive? Witnessed a child, whimpering for his mother, getting beheaded? Have you watched a man, pinned down by two police officers, being shot multiple times in the chest and back?

A few years ago, if you answered “yes” to these questions, you might have been considered deranged. Possibly, you would have been on a list somewhere, being monitored for seeking out disturbing and illicit videos online. Now, you’re more than likely just a victim of social media’s ubiquitous autoplay function.

No one likes autoplay. Back in the Nineties, homepages often came with their own jaunty background tune that would automatically play, but it didn’t take long for this annoying and invasive practice to die out. Nowadays, when you click on a website plastered with noisy adverts and clips, you immediately click off it. But although users frequently bemoan them, autoplay videos remain a huge business model for social media sites such as Twitter, Facebook, and Tumblr.

That’s fine, of course, when the autoplaying video in question is a bird’s-eye view tutorial on how to make nacho-chicken-pizza-fries (though even then, the videos might be gobbling up your data allowance without your consent). The problem arises when disturbing content is posted by users, and even media outlets, without any warnings or disclaimers.

“There are many incidents where the autoplay feature has affected me negatively,” says Sarah Turi, a 19-year-old college student from Boston, USA. Turi suffers from anxiety, and says that anything related to horror or gore can keep her awake for multiple nights on end. She has previously experienced anxiety attacks after viewing autoplaying horror movie trailers.

“Recently though, many of the videos that begin playing have to do with police brutality or terrorism or riots,” she says. “There was one incident where someone had shared a video of an execution. The video started playing, and before I could scroll away, I watched a man get beheaded by a terrorist organisation. It left me pretty shaken to say the least. I wasn't crying, but I was frozen to the spot. Even just thinking about it now leaves me feeling somewhat ill.”

Dr Dawn Branley, a health and social psychologist specialising in the risks and benefits of internet and technology use, tells me that autoplay videos on social media raise a variety of ethical concerns.

“Social media is more personal in nature compared to news channels and it is also often idly browsed with little conscious effort or concentration, and, as such, users may not be mentally prepared for the sudden appearance of a distressing video,” she says. “Suddenly witnessing a beheading, rape or graphic animal cruelty whilst scrolling through photos of your friends and family, holiday snaps, baby videos and wedding announcements may provide a real shock to the viewer.”

Branley says that, in her line of work, she has spoken to users who have experienced distress at photos of abuse and violence on social media, and speculates that autoplay video could only exacerbate this problem. She also notes that they can trigger vulnerable users, for example, people who suffer from eating disorders or PTSD.

Even those without pre-existing conditions can be negatively affected, however. As anyone who has seen disturbing footage knows, it can pop into your head intrusively at any time and refuse to budge, remaining plastered to the edges of your skull. Even trolls are aware of this: some tweet distressing footage at people precisely because it will autoplay.

In January 2015, Facebook responded to these issues by adding warnings to videos users flagged as graphic, meaning the footage wouldn’t autoplay and was preceded by a warning message. Viewers under 18 would be shielded from seeing violent content on their feeds. Yet just over seven months later, in August, autoplay meant thousands inadvertently watched footage of the shooting of TV journalists Alison Parker and Adam Ward.

Remember when I said no one likes autoplay? That’s not strictly true. You have three seconds to scroll away from an autoplaying video before Facebook counts it as a view. In a world where Facebook, and the users of it, are desperate to tally up as many views as possible, autoplay is considered a smart business model.

“Autoplay video originated as a way to capture viewers’ attention and prevent them from ignoring or scrolling past website content,” says Branley. “The autoplaying nature of a video is likely to capture the viewers’ attention and may potentially be harder to resist watching – compared to a static image and text.”

For those profiting, it seems not to matter that some people who can’t look away are viewers like Turi, frozen on the spot by distress.

Because of how profitable autoplay is, then, many news outlets continue to upload sensitive footage that might be better suited to their own websites – a consensual click away. They might add their own pre-roll warnings, but Branley notes that these are easy to miss if the video is autoplaying. If you were online yesterday, you might have seen this in action, as footage of a boy – or rather the boy – in an ambulance, distressed and bloodied, autoplayed repeatedly across social media.

News outlets have been called out for this before, and have admitted their mistakes. In August 2015, New York Times social media editor Cynthia Collins told The Media Briefing that she wished she’d added a warning to a video of men being shot and killed at sea. After backlash from their audience, she said:

“If we could do it all again . . . there would have been a discussion about whether or not we should upload the video at all. But if we decided to upload the video I absolutely would have added a warning.”

The video ended up on the website, and viewers had to click through a handful of warnings before watching it. News footage has always had the potential to alarm and distress, but at least in the past viewers had a choice about whether they watched it. Although many news outlets have guidelines on graphic content (such as, for example, the famous breakfast test), these haven’t always been updated for social media.

“It’s important that users are made aware of potential solutions to this problem,” says Branley, noting that Facebook and Twitter include options in their settings to turn off autoplay, and your browser or phone may also allow you to disable all autoplay. “However, that does not detract from the moral obligation that internet platforms should consider when introducing autoplay.

“I would suggest that an ‘opt-in’ approach (where users are required to switch on the autoplay setting if they wish to enable this function) would be much more appropriate than the current ‘opt-out’ approach, which requires users to find the settings to switch off autoplay if they do not wish to use it.”  

This seems like the simplest and fairest answer. It’s hard to argue that distressing videos shouldn’t be posted on Facebook – last month, the footage of Philando Castile’s shooting dramatically shed light on police brutality – but it seems only natural that viewers should have a choice about what they watch.

“It is possible that autoplay videos could be used to raise awareness of sensitive topics and/or to grab users' attention for positive reasons like charity campaigns,” says Branley. “However, it is a fine line between raising awareness and distressing the viewer and what one viewer finds acceptable, another may find highly distressing. Therefore, care and consideration is required.”

Right now, both care and consideration are lacking. In its current iteration, autoplay is like Anthony Burgess’ Ludovico technique – pinning our eyes open and forcing us to watch violence and horror without our consent. There are things I know I never want to watch – the curb stomp in American History X, an Armenian weightlifter dislocating their elbow during the Olympics – that could be sprung upon me at any time. Why? Because someone, somewhere, profits.

“I don't think autoplay is necessary in Facebook,” says Turi. “I think that people should decide for themselves whether or not they want to watch something. And yes, I think that it should be banned.”

Amelia Tait is a technology and digital culture writer at the New Statesman.