
The Research Brief: Why harmful social media content needs stronger regulation

Your weekly dose of policy thinking.

By Spotlight

Welcome to the Research Brief, where Spotlight, the New Statesman’s policy section, brings you the pick of recent publications from the government, and the think tank, charity and NGO world. See more editions of the Research Brief here.

What are we talking about this week? A report examining the prevalence of harmful content on social media platforms such as TikTok, Instagram and Pinterest, and its negative effects on young people, which its publishers describe as the “first of its kind”. It’s called Preventable yet pervasive: The prevalence and characteristics of harmful content, including suicide and self-harm material, on Instagram, TikTok and Pinterest. The research, undertaken by the Molly Rose Foundation charity and Bright Data, was released on October 29.

What’s the backstory? Molly Russell, a teenager from Harrow, London, died aged 14 following an act of self-harm in 2017. She had suffered from depression and had consumed a host of “graphic” self-harm and suicide content on social media. In 2022, an inquest into her death ruled that the harmful social media content Russell had consumed contributed “more than minimally” to her death. The Molly Rose Foundation was set up in her memory to raise awareness of suicide prevention and to connect young people with support.

Social media algorithms deliver tailored content to users and can provide seemingly endless opportunities for procrastination and scrolling. But they can also do much, much worse, feeding vulnerable people a stream of harmful content featuring depression, self-harm and suicidal ideation.

The consumption of such content has been linked to instances of self-harm and suicides, particularly among young people. The report cites recent research which notes that suicide-related internet use has been reported in almost a quarter (24 per cent) of deaths by suicide among young people aged ten to 19 – equivalent to 43 lost lives every year.


What did the researchers find? That harmful content is not only easily accessible, but “routinely recommended” and available in “large volumes” to users across a number of social media platforms.

Researchers found that two thirds of the most engaged posts on Instagram using well-known suicide and self-harm hashtags contained material that promoted or glorified the acts. Likewise, on TikTok, nearly half (49 per cent) of the most engaged posts using suicide and self-harm hashtags contained material that either promoted or glorified suicide and self-harm, or contained “intense themes of misery, hopelessness or depression”.


Such content has “the potential to cause harmful effects, particularly if viewed in large amounts or cumulatively over time”, the researchers argue. They add that this situation is the result of “significant fundamental system failings by leading social media giants in handling self-harm and suicide content”.

And this kind of content has a wide reach. Over half (54 per cent) of the most engaged harmful posts on TikTok, for example, had been viewed more than a million times. The “poorly conceived design features and high-risk algorithms” that power social media platforms use a combination of user prompts, recommended search terms and other features to maximise engagement, the researchers note, but have not “adequately addressed” the potential for users to be swamped with harmful content.

What are the researchers pushing for? For Ofcom, the UK’s communications regulator, to introduce a strong framework that ensures “tech platforms pay adequate attention to the risks associated with how content is likely to be consumed, including as a direct consequence of algorithms and other high-risk design features”.

Ofcom is responsible for holding social media companies to account over online content, following the passage of the long-trailed Online Safety Bill into law in October. For years, draft versions of the bill required platforms to tackle “legal but harmful” content, including self-harm and suicide material. That requirement was dropped before the bill became law; instead, social media companies will need to give adult users more control to filter out harmful content they don’t want to see.

Ian Russell, Molly’s father, said: “Our findings show the scale of the challenge facing Ofcom and underline the need for them to establish bold and ambitious regulation that delivers stronger safety standards and protects young lives.”

In a sentence? Harmful social media content is being fed to people via highly tuned algorithms with poorly designed mitigation systems – Ofcom must develop a regulatory framework that compels platforms to take meaningful action to stop the spread of dangerous content.

Read the full report from the Molly Rose Foundation here.
