
15 December 2020, updated 21 September 2021

Extreme radicalisation is happening on Facebook and YouTube – what can be done to stop it?

Mainstream social media platforms served as an extremist breeding ground for the Christchurch shooter. Only fundamental change will prevent similar tragedies.

By Sarah Manavis

One boring summer afternoon when I was a kid, my sister and I pestered my dad to take us somewhere new. He thought for a moment and asked if we wanted to go to a new park nearby. We couldn’t believe there was a park in our neighbourhood that we had never heard of before. My dad told us to think about it. 

“Dell Park,” he said. “That’s the name. Can you remember where it is?” We immediately started hypothesising, spending what was probably half an hour discussing the different niche parts of our small town that the park could possibly be in. After combing through all of our options we gave up and he said he’d take us. We walked 90 seconds from our house and my dad stopped and pointed at a street sign – “Dell Park”. We’d somehow forgotten it was the name of the road parallel to ours.

My dad rang our neighbour’s doorbell and they both laughed as he retold the joke. While we were busy chasing our first instinct, we missed the answer that was right in front of us. What we learned that day wasn’t that a particular type of answer tends to be right, but how easy it is to become distracted from the truth by what feels like the obvious solution.

On 8 December 2020, a New Zealand Royal Commission report into the Christchurch terrorist attack was released. It examined the two successive shootings which took place in March 2019 and led to 51 deaths and 40 injuries across two mosques. The attacks rapidly became the subject of global media attention, not only because of their gruesome, racially targeted nature, but because of the perpetrator’s links to hard-right online groups. The shooter livestreamed the first 17 minutes of the attack on Facebook, during which he declared: “Remember lads, subscribe to PewDiePie”, the catchphrase of a subscription campaign from YouTuber Felix Kjellberg. Prior to the attack, the shooter uploaded a 74-page “manifesto” to the notorious chat forum 8chan. The manifesto was riddled with ironic online in-jokes, and many suspected it to be a long “shitpost” – a post that is typically nonsensical, surreal or ironic, made in order to bait people into a reaction.

At the time, commentators were quick to pin the shooter’s radicalisation on far-right internet culture – writers, myself included, wrote explainers that linked the shooter’s demeanour to long-running memes on insidious, niche platforms. The shooting was even partly blamed for the eventual demise of 8chan, which was taken offline when its security firm, Cloudflare, withdrew its protection after two more mass shootings were linked to the site.


These assertions, until last week, were educated guesses. Based on the memes the attacker deployed and where he published his manifesto, it seemed likely he was another man lured in by sites propagating white supremacy. But although these guesses were well-informed, they have now been proved only half true: the Royal Commission’s report found that, despite the shooter’s association with these sites, he was radicalised on Facebook and YouTube.

In a statement following the report’s release, New Zealand’s prime minister Jacinda Ardern apologised for security “failings”, noting that the terrorist was “not a frequent commentator” on nefarious sites and that he found YouTube a “significant source of information and inspiration”.  

“This is a point I plan to make directly to the leadership of YouTube,” she said. 

The report also included other details of the shooter’s internet history, such as his visits to the “extremist” Facebook group “The Lads Society” and his significant donations to several far-right organisations. He visited sites such as 4chan and 8chan, but was not an active participant.

Over the next few weeks and months, the online spaces that the Christchurch shooter spent the most time on – the specific channels, videos, posts and Facebook pages he followed – will come under greater scrutiny. This important work has the potential to prevent a handful of people from being similarly radicalised. But the Christchurch case points to another conversation that we are having too late: radicalisation isn’t just happening in insidious little corners of the internet. It’s happening in the open, on mainstream platforms that most of us are using.

[See also: New Zealand’s report on far-right terrorism shows why politicians must no longer ignore the threat]

***

For a long time, both YouTube and Facebook have been able to shy away from this uncomfortable truth. While both would admit they have problems with misinformation and conspiracy theories on their platforms, they have been able to deal with such cases half-heartedly and on a one-off basis. But now there is proof that the most extreme user journey on these sites isn’t simply someone being fooled by misinformation or getting hooked on a single conspiracy theory. It’s a seismic shift in worldview, the end point of which is genuine harm and the death of innocent people. It is happening this very second, twisting people’s brain chemistry towards radical, dangerous ideas – a trajectory that can rarely be reversed.

Reputation has arguably played the biggest role in distorting this reality. It’s easy to brand 4chan as a white supremacist site when racial slurs appear on almost every page; it’s harder to do so with a site like YouTube, which is largely associated with influencers such as Zoella and KSI. Facebook, by contrast, has suffered reputational damage from the Cambridge Analytica scandal and rampant misinformation. Yet it gets away with being branded a fake news hub, not a place where people become neo-Nazis. Its image may not be entirely positive, but a much darker one is skirted.

Both YouTube and Facebook can ultimately pin the blame on 8chan, 4chan, or even Reddit, where users’ language is often more extreme and violent ideation more vivid. The larger sites could argue that, although they may be the starting point for a user’s radicalisation, they aren’t the real cause. But after the New Zealand report, this defence has lost all credibility. It is now incumbent on Facebook and YouTube to take a stance – something that both platforms have long avoided.

[See also: How the alt-right is pivoting to TikTok]

Neutrality and apoliticism have for years been convenient get-out clauses, allowing the sites to assuage critics until the next inevitable crisis. But through such behaviour, Facebook and YouTube have allowed what was once a serious but manageable problem to morph into something far worse. On YouTube, radicalisation is so fundamental to some users’ experiences that it will take an extraordinary set of changes to eliminate this risk. The site must be restructured or, in reality, shut down to prevent a significant share of users from suffering this fate.

In the immediate term, however, YouTube could make fundamental changes to its terms and conditions with the aim of reducing hard-right, white supremacist content. It could change its infamous algorithm, which offers a conveyor belt of progressively more extreme content after each video viewed. But these changes would require the site to sacrifice a significant part of what keeps users returning. 

YouTube knows it’s easier – and financially simpler – to acknowledge each case in isolation and then move on. So this is what it does: it tweaks its algorithm, adjusts its guidelines, and hails its action, while in reality doing almost nothing to prevent radicalisation. As tech writer Becca Lewis wrote last week, “There is often a great disconnect between what actions YouTube says it is taking and what users and creators actually experience… The great irony is that by attempting to stay apolitical, YouTube consistently makes the political choice not to care about or protect vulnerable communities.”

Another reason why this conversation feels impossible is that deradicalisation is hard to achieve. Preventing radicalisation is something we know how to do – but what about the tens of millions of people who have already gone over the edge? The truth is that we know little about how to help those who have suffered this fate; there are no basic guidelines or even complex steps to follow. And if YouTube is to take responsibility, it will be at least partly left to the sites themselves to find that elusive solution.

With this in mind, we must do what we can to force the hand of the tech companies we now know are at fault. The narrative that radicalisation happens on the internet’s worst backwaters is only one part of the story. The sooner we acknowledge that radicalisation is happening on the platforms most of us know and use, the sooner changes to prevent tragedies such as Christchurch can happen. Only radical action will ensure that social media platforms are no longer extremist breeding grounds.

[See also: How it feels to escape the far right]
