Behind the Wikipedia wars: what happened when Bradley Manning became Chelsea

Abigail Brady, who edits the site as Morwen, explains the polite notes and not-votes behind the scenes.

Shortly after Chelsea Manning's statement regarding her transition was made public, the Wikipedia article on Bradley Manning was moved to her new name, with the old title left as a redirect. The article now consistently refers to Chelsea by her chosen name and pronouns, showing more understanding of the issues at hand than many more traditional news sources. But the move wasn't without friction. A glance at the article's talk page, where editors discuss changes, shows an argument in full flow. "This PC-driven move lowers wikipedia's credibility even further", writes one opposing the change. "A person's gender identity is their choice to make," says another, supporting the move.

Abigail Brady, who is on the site under the name Morwen, was the admin who first made the move. I spoke to her about Manning, Wikipedia, and edit wars in general.

How long have you been a Wikipedia editor? What drew you to the site?

I recently celebrated the 10th anniversary of my first registered edit to Wikipedia. I started an article about the old East German Parliament, oddly enough – this was back when Wikipedia was rapidly ascending in the search rankings for lots of hits, and it was often hard to find good factual stuff on other sites. Adding things to Wikipedia that I knew about – and there was a lot of UK geography and politics that was not covered at all – was a massively exciting task. I got made an admin pretty quickly – it was a much less paranoid process back then.

Have you had experiences beforehand which prepared you for a debate this ferocious?

To some extent yes. I have stayed out of trans issues on Wikipedia for ages, but you should see the arguments we had about Star Trek ranks!

Do you get involved with trans issues on the site more generally, or is it just for this case?

My memory is a bit fuzzy, but, for example, I started the article on April Ashley, and took a relatively hard line against people mucking around with it and other such articles. I think I can claim some credit for the current "use identity" thing being in the style guide. More recently, I was involved in a dispute over whether the article "cisgender" should exist.

Was your first thought upon reading Chelsea's statement to move the page? How much time passed between you finding out and you making the edit?

There's an essay on the site called WP:RECENT which cautions against being too quick to update. We are, after all, building an encyclopaedia, not a news source. But as I read the statement I saw how completely unambiguous it was. There had been discussions about this before, which I was aware of but did not participate in (in fact, it was an FAQ on the talk page before yesterday). So I posted on the talk page, saw that someone else was making the same suggestion that I did, held off for a little while until a small consensus emerged, and then pressed the button. I thought I was giving it plenty of time, given how clear that release was!

Is there a culture on the site of trying to be the first to update pieces with news? Did that motivate you?

There is a bit of a friendly rivalry about being the first to update, but it's not taken too seriously, and I'd never consider putting stuff in there against my editorial judgement, such as it is, just to claim credit. I created the articles about the 7/7 bombings back in the day and, bizarrely, got a radio interview off it (I predicted Twitter's role in grassroots news gathering!). I'm fairly inactive now but also took a role updating the article about the Leveson report, when that came out, and dealing with the page about a Baron McAlpine. So I came out of the woodwork, because I don't mind stepping on the landmines. Mixed metaphor there, sorry.

What's Wikipedia's policy on people transitioning? Who sets that policy?

Wikipedia's policy, according to MOS:IDENTITY and long-established practice, is to use the preferred name and pronouns for the entirety of a life. I like that policy. Policies emerge through a kind of consensus-building process which would probably horrify you if you looked at it in too much detail, but they generally turn out pretty well.

What happened immediately after you made the move?

Someone reverted it back, pretty quickly. But I left them a polite note, asking if they'd actually read the reference I'd given, and it turns out they hadn't, and they apologised. So I put it back. I've made it a policy never to actively get involved in an "edit war", after several annoying experiences in the past. So I've stayed off the page proper since then, and confined myself to talk.

After that short squabble, discussion moved to the talk page. How did that go?

Someone said I never should have done the move in the first place. We have a policy of "being bold", but they said this didn't apply here and I should have done a "requested move" first, which is a consensus-gathering approach. (Wikipedia policies are great in the same way standards are – there's one for every occasion and line of argument). So now we are having a "not vote", as we call it, where people say whether they support or oppose the move (that is, the move back), and outline their reasoning.

It seems the page is full of the professionally outraged. Do you think they really are aiming at making the best encyclopaedia possible?

I honestly don't know. Many of them are raising the same old points, over and over again, like they are novel. Yes, there's a background of transphobia to a lot of this, but I think a lot is people driving by and insisting on having their opinion on the raging topic of the day. Someone has come forward already and volunteered to look at the argument and try and determine some kind of consensus from it (hah), and they're going to have their work cut out for them, but they're supposed to look at the actual debate, rather than just weigh the number of randoms who have expressed their opinions, bigoted or not.

The page was first "semi-protected", then "protected". What does that mean? Will it end?

Protected means that technically nobody can edit the page apart from an admin. However, that doesn't mean admins are allowed to edit freely: they may only make clearly good edits (like fixing typos) or ones that have been requested and consented to by general users on the talk page. Semi-protection is when we lock the page so that new accounts and people who aren't logged in can't edit it. Both things are done as a temporary measure while things cool down – and the few incidents where the "temporary" becomes basically permanent are considered regrettable. The traditional description of admin powers on Wikipedia is that it is a janitorial function. It doesn't give you any authority in itself, it just shows that the community has trusted you a bit more to use the software responsibly. Obviously, there's an extent to which that is an ambition rather than an absolute description of reality, and the admins do have a certain amount of de facto power, but people who misuse it can be, and have been, removed.
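For readers who find code clearer than prose, here is a minimal sketch of how those two protection levels gate editing. It is purely illustrative: the names are invented rather than taken from MediaWiki, and the "autoconfirmed" threshold used for semi-protection is an assumption for the sake of the example.

```python
from dataclasses import dataclass

# Illustrative sketch only -- not MediaWiki's actual code or API.
# It models the two levels Brady describes: semi-protection locks out
# logged-out readers and brand-new accounts; full protection leaves only admins.

@dataclass
class User:
    is_logged_in: bool
    is_admin: bool
    edit_count: int
    account_age_days: int

def can_edit(protection: str, user: User) -> bool:
    """Return True if the user may edit a page under the given protection level."""
    if protection == "none":
        return True
    if protection == "semi":
        # Assumed "autoconfirmed" rule (account at least 4 days old, 10 edits) --
        # an assumption for illustration, not something stated in the interview.
        return user.is_logged_in and user.account_age_days >= 4 and user.edit_count >= 10
    if protection == "full":
        # Only admins can edit, and by convention only for clearly good fixes
        # or changes already agreed on the talk page.
        return user.is_admin
    raise ValueError(f"unknown protection level: {protection}")

# Example: a logged-out reader cannot touch a semi-protected page.
assert not can_edit("semi", User(is_logged_in=False, is_admin=False,
                                 edit_count=0, account_age_days=0))
```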

Alex Hern is a technology reporter for the Guardian. He was formerly staff writer at the New Statesman. You should follow Alex on Twitter.

“I was frozen to the spot”: the psychological dangers of autoplay video

Videos that play automatically are now ubiquitous across social media, but the format leaves many viewers vulnerable to harm and distress.

Have you ever seen a dog being skinned alive? Witnessed a child, whimpering for his mother, getting beheaded? Have you watched a man, pinned down by two police officers, being shot multiple times in the chest and back?

A few years ago, if you answered “yes” to these questions, you might have been considered deranged. Possibly, you would have been on a list somewhere, being monitored for seeking out disturbing and illicit videos online. Now, you’re more than likely just a victim of social media’s ubiquitous autoplay function.

No one likes autoplay. Back in the Nineties, homepages often came with their own jaunty background tune that would automatically play, but it didn’t take long for this annoying and invasive practice to die out. Nowadays, when you click on a website plastered with noisy adverts and clips, you immediately click off it. But although users frequently bemoan them, autoplay videos remain a huge business model for social media sites such as Twitter, Facebook, and Tumblr.

That’s fine, of course, when the autoplaying video in question is a bird’s-eye view tutorial on how to make nacho-chicken-pizza-fries (though even then, the videos might be gobbling up your data allowance without your consent). The problem arises when disturbing content is posted by users, and even media outlets, without any warnings or disclaimers.

“There are many incidents where the autoplay feature has affected me negatively,” says Sarah Turi, a 19-year-old college student from Boston, USA. Turi suffers from anxiety, and says that anything related to horror or gore can keep her awake for multiple nights on end. She has previously experienced anxiety attacks after viewing autoplaying horror movie trailers.

“Recently though, many of the videos that begin playing have to do with police brutality or terrorism or riots,” she says. “There was one incident where someone had shared a video of an execution. The video started playing, and before I could scroll away, I watched a man get beheaded by a terrorist organisation. It left me pretty shaken to say the least. I wasn't crying, but I was frozen to the spot. Even just thinking about it now leaves me feeling somewhat ill.”

Dr Dawn Branley, a health and social psychologist specialising in the risks and benefits of internet and technology use, tells me that autoplay videos on social media raise a variety of ethical concerns.

“Social media is more personal in nature compared to news channels and it is also often idly browsed with little conscious effort or concentration, and, as such, users may not be mentally prepared for the sudden appearance of a distressing video,” she says. “Suddenly witnessing a beheading, rape or graphic animal cruelty whilst scrolling through photos of your friends and family, holiday snaps, baby videos and wedding announcements may provide a real shock to the viewer.”

Branley says that, in her line of work, she has spoken to users who have experienced distress at photos of abuse and violence on social media, and speculates that autoplay video could only exacerbate this problem. She also notes that such videos can trigger vulnerable users – for example, people who suffer from eating disorders or PTSD.

Even those without pre-existing conditions can be negatively affected: anyone who has seen disturbing footage knows how it can pop into your head intrusively at any time and refuse to budge, remaining plastered to the edges of your skull. Even trolls are aware of this, and some tweet distressing footage at people knowing that it will autoplay.

In January 2015, Facebook responded to these issues by adding warnings to videos users flagged as graphic, meaning the footage wouldn’t autoplay and was preceded by a warning message. Viewers under 18 would be shielded from seeing violent content on their feeds. Yet just over seven months later, in August, autoplay meant thousands inadvertently watched footage of the shooting of TV journalists Alison Parker and Adam Ward.

Remember when I said no one likes autoplay? That’s not strictly true. You have three seconds to scroll away from an autoplaying video before Facebook counts it as a view. In a world where Facebook and its users are desperate to tally up as many views as possible, autoplay is considered a smart business model.

“Autoplay video originated as a way to capture viewers’ attention and prevent them from ignoring or scrolling past website content,” says Branley. “The autoplaying nature of a video is likely to capture the viewers’ attention and may potentially be harder to resist watching – compared to a static image and text.”

For those profiting, it seems not to matter that some people who can’t look away are viewers like Turi, frozen on the spot by distress.

Because autoplay is so profitable, many news outlets continue to upload sensitive footage that might be better suited to their own websites – a consensual click away. They might add their own pre-roll warnings, but Branley notes that these are easy to miss if the video is autoplaying. If you were online yesterday, you might have seen this in action, as footage of a boy – or rather the boy – in an ambulance, distressed and bloodied, autoplayed repeatedly across social media.

News outlets have been called out for this before, and have admitted their mistakes. In August 2015, New York Times social media editor Cynthia Collins told The Media Briefing that she wished she’d added a warning to a video of men being shot and killed at sea. After backlash from the paper’s audience, she said:

“If we could do it all again . . . there would have been a discussion about whether or not we should upload the video at all. But if we decided to upload the video I absolutely would have added a warning.”

The video ended up on the website, and viewers had to click through a handful of warnings before watching it. News footage has always had the potential to alarm and distress, but at least in the past viewers had a choice about whether they watched it. Although many news outlets have guidelines on graphic content (such as, for example, the famous breakfast test), these haven’t always been updated for social media.

“It’s important that users are made aware of potential solutions to this problem,” says Branley, noting that Facebook and Twitter include options in their settings to turn off autoplay, and your browser or phone may also allow you to disable all autoplay. “However, that does not detract from the moral obligation that internet platforms should consider when introducing autoplay.

“I would suggest that an ‘opt-in’ approach (where users are required to switch on the autoplay setting if they wish to enable this function) would be much more appropriate than the current ‘opt-out’ approach, which requires users to find the settings to switch off autoplay if they do not wish to use it.”  

This seems like the simplest and fairest answer. It’s hard to argue that distressing videos shouldn’t be posted on Facebook – last month, the footage of Philando Castile’s shooting dramatically shed light on police brutality – but it seems only natural that viewers should have a choice about what they watch.

“It is possible that autoplay videos could be used to raise awareness of sensitive topics and/or to grab users' attention for positive reasons like charity campaigns,” says Branley. “However, it is a fine line between raising awareness and distressing the viewer, and what one viewer finds acceptable, another may find highly distressing. Therefore, care and consideration is required.”

Right now, both care and consideration are lacking. In its current iteration, autoplay is like Anthony Burgess’ Ludovico technique – pinning our eyes open and forcing us to watch violence and horror without our consent. There are things I know I never want to watch – the curb stomp in American History X, an Armenian weightlifter dislocating their elbow during the Olympics – that could be sprung upon me at any time. Why? Because someone, somewhere, profits.

“I don't think autoplay is necessary in Facebook,” says Turi. “I think that people should decide for themselves whether or not they want to watch something. And yes, I think that it should be banned.”

Amelia Tait is a technology and digital culture writer at the New Statesman.