When it comes to things that Twitter has managed to do right, the list is short and pathetic. Making the aesthetic less clunky, sure. Auto-updates on likes and retweets in real time? Great! The ability to thread tweets and even post a single thread at once? Fine, a welcome addition.
But one genuinely unsung accomplishment from Twitter is its removal of Islamic State propaganda. The platform has managed to swiftly remove accounts, tweets, and hashtags promoting ISIS radicalisation and has done so successfully enough that the media barely noticed.
So, if Twitter can eradicate content related to one of the biggest waves of online radicalisation in history, why can’t it seem to do so for literally anything else? As VICE’s Motherboard reported on Friday, Twitter is now asking itself this question, and held an all-hands meeting on 22 March to discuss the removal of white supremacist content specifically.
Yet even though Islamic State content has largely been eradicated, Twitter is finding white nationalism an altogether different challenge. And staff are citing one particular reason: if the platform applied the same algorithms it used to get rid of IS content, it would find itself removing and banning the accounts of swathes of prominent Republican politicians in the US.
In the Motherboard report, an employee who attended the meeting said that Twitter sees no realistic way to target white supremacist content aggressively without also removing these far-right figures:
“The employee argued that, on a technical level, content from Republican politicians could get swept up by algorithms aggressively removing white supremacist material. Banning politicians wouldn’t be accepted by society as a trade-off for flagging all of the white supremacist propaganda, he argued.”
This is grim and, let’s face it, very, very funny. But, as a business, Twitter does have a fair point. It has in the past cited concerns about the extreme backlash it’d receive if it removed politicians’ accounts (and other popular right-wing accounts), and those concerns are almost certainly well-founded. When VICE published a report claiming that Republicans were being “shadow-banned” last year, the story was almost immediately debunked. But despite the rapid clarification that this was not happening at all, Twitter remains riddled with right-wing accounts boasting an “X” emoji in their username – the symbol indicating the user believes their account has been shadow-banned.
So any decision to ban and/or remove content from politicians would result in some bad press from the far right. But then again, there’s a little something called corporate – and, frankly, moral – responsibility. And Twitter, for perhaps the first time ever, has inadvertently admitted that it allows white supremacists to run rampant on its platform in order to avoid an angry response from, well, white supremacists.
While this particular story may be new, Twitter has long been shirking its responsibility to remove harmful content. It has jumped through every possible hoop to avoid banning Nazis – even after saying it was going to ban Nazis – and has a track record of not even knowing what’s been happening on its own platform.
Twitter may be displaying slightly more self-awareness this time, but the problem is no different. And like most issues it has faced in the past, Twitter will likely keep jumping through hoops to avoid tackling it.