24 August 2022

Andrew Tate has been banned from social media. Is this a new era of content moderation?

Facebook, Instagram and TikTok’s move to ban the misogynist was surprisingly swift and decisive.

By Sarah Manavis

In the past decade, Facebook and Instagram (both now owned by Facebook's parent company, Meta) have developed a reputation for being decidedly lax in dealing with harmful content on their sites. Whether responding to the alt-right, anti-vaxxers, virulent misogynists or Donald Trump, these platforms have tended to avoid direct action; their approach to content moderation has been cowardly and responsibility-shirking, repeatedly kicking the can down the road.

However, Facebook and Instagram have in recent days taken surprisingly swift and decisive action against two accounts that were spreading harmful information online. First came the suspension of Andrew Tate, an influencer known to his fans as "the king of toxic masculinity", after a sharp surge in his popularity through August (he was gaining 70,000 to 100,000 new followers a day in mid-August, and had 4.7 million Instagram followers at the time of his ban). He first rose to prominence by appearing on Big Brother in 2016, when he was removed from the show after a video emerged of him beating a woman with a belt (both Tate and the woman in question later said it was consensual). Tate then became a social media personality who promoted extreme, misogynistic ideas – including the beliefs that men and women are not equal, that women are comparable to animals and property, and that rape victims are to blame for their attacks.

Though his large following dates back several years, Tate’s name only entered the mainstream media a few weeks ago. This was in large part thanks to TikTok – the app’s superfast algorithm helped him reach a broader audience over the summer months. (TikTok also permanently banned Tate’s account on Monday.)

Alongside Tate, Facebook and Instagram last weekend removed the accounts of Children's Health Defense (CHD), the non-profit of Robert F Kennedy Jr, one of the world's most prominent anti-vaxxers. Kennedy and CHD were last year cited in the Center for Countering Digital Hate's global report on the "Disinformation Dozen" as being among the 12 accounts doing the most to bolster the anti-vax movement online. The study urged social media platforms to remove his and CHD's accounts, saying that doing so could "significantly reduce the amount of disinformation being spread across platforms".

These bans are a surprise not just because of how long it has taken Facebook to act in the past, but also because of recent statements from the company indicating it may be softening its stance on Covid-19 misinformation. Just last month Nick Clegg, the company's president of global affairs, said that there may no longer be a need to remove posts containing misinformation about things such as masks and the transmissibility of the virus – only those that "contribute to a risk of imminent physical harm". Taking down two major accounts in the space of a few days points to a moderation strategy that takes harmful content far more seriously than it did 18 months ago.

This is undoubtedly a move in the right direction. But it's hard to understand how the logic driving these bans is being applied more broadly. Although he was removed from Instagram in early 2021, Kennedy's personal page is still live on Facebook at the time of writing, with 274,000 likes – and the majority of his most recent posts include links to the CHD website. Equally, Tate is far from the only man on social media promoting dangerous misogyny – and though he has been banned, fan accounts continue to repost clips of him online.

The approach of removing big-name accounts to stem the spread of harmful ideas can be a good short-term solution. But it is not a real plan for dealing with similarly harmful content that is thriving on these platforms at scale. It has become increasingly obvious how hard much of this content is to moderate on a practical level – particularly images and videos, which are harder to scan than text. Combing through the millions of posts uploaded every minute on Facebook and Instagram is a task that neither human moderation nor AI can do thoroughly, meaning platforms usually only consider a ban once a post has gained wider visibility, either through its popularity or through multiple users calling for that particular account's suspension.

However, these challenges don’t excuse the fact that these platforms are hosts to this content – they merely highlight the great lengths these social media sites would need to go to (and the major, expensive changes they would need to adopt) to seriously deal with the dangerous content they are allowing users to publish.

That Tate’s accounts were removed quickly by major social platforms is positive – and a notable change. The issue remains that Tate was able to build this following on these platforms, under the radar, for more than half a decade. Who is the next Andrew Tate, growing their audience on Meta’s platforms right now? And how much damage will they do before Facebook and Instagram take notice?
