Why everyone loses if Westminster passes poor online regulation

Pushing the Online Safety Bill through parliament too quickly will do more harm than good.

By Kir Nuthi

Broadsheets and online news sites have been filled of late with Conservative and Labour politicians discussing the future of the Online Safety Bill. While support for the legislation has exploded over the past three years as policymakers drafted and redrafted it, the current version is attracting significant criticism from opponents, who argue that it sacrifices too many civil liberties in pursuit of safety from online harms.

These concerns are absolutely justified, which is why parliament needs to significantly amend the Online Safety Bill to better balance free expression and privacy – otherwise it may have to scrap the bill entirely.

The Online Safety Bill is a legislative proposal that attempts to moderate the internet by establishing duties of care – binding, enforceable legal obligations – for online services. In practice, these duties of care compel search engines, social media platforms and other services built on user-generated content to censor online content that the government deems harmful. The trouble is that in minimising “legal but harmful” content across internet services, the Online Safety Bill undermines the free speech of the UK’s many diverse communities and compromises the digital safety of UK users. Only by amending the bill to moderate unlawful content alone, and by excluding private messaging services from its scope, can parliament make the Online Safety Bill do what it says on the tin.

The “legal but harmful” category – content that is not necessarily illegal but “presents a material risk of significant harm to an appreciable number of” children or adults in the UK – is too vague to foster responsible moderation practices. At best, this language will confuse companies about what content they must proactively prevent and remove, as they struggle to apply a subjective standard consistently. At worst, it could push online services to over-moderate large swathes of valuable online content – conversations about police practices, consent and civil rights, for example – for fear of hefty fines and criminal charges against their staff.

The bill covers more than just social media and search engines, extending its reach to internet messaging services such as Signal, WhatsApp and iMessage, which use end-to-end encryption to protect their users’ privacy. End-to-end encryption ensures that nobody other than the users messaging each other – not even the online service – can read their messages. Historically, dissidents, human rights activists, LGBT+ youth and abuse survivors have relied on these platforms to stay safe from threats of persecution or violence. But because the bill covers these private messaging platforms, they would be required to prevent illegal content on their services – an impossible task for services that, by design, cannot read their users’ messages.
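
To see why the requirement is impossible, it helps to look at the shape of end-to-end encryption itself. What follows is a minimal sketch in Python using the PyNaCl library – the key names and message are illustrative, and real messaging protocols such as Signal’s layer ratcheting and forward secrecy on top of this basic pattern. The point is that the relaying service only ever handles opaque ciphertext, so it has nothing it could scan for illegal content.

from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device; the private
# half never leaves that device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sender_box = Box(alice_key, bob_key.public_key)
ciphertext = sender_box.encrypt(b"Meet at the usual place.")

# This is all the messaging service ever sees: opaque bytes it
# cannot decrypt, and therefore cannot moderate.
print(ciphertext.hex())

# Only Bob, holding his private key, can recover the message.
receiver_box = Box(bob_key, alice_key.public_key)
print(receiver_box.decrypt(ciphertext))  # b'Meet at the usual place.'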

As a result, the bill would compel these private messaging services to weaken their encryption protections, adopt client-side scanning or create backdoors that foreign adversaries and bad actors could exploit in cyber attacks. Ironically, by weakening or removing the encryption that these marginalised communities rely on, the legislation would make users’ personal information more vulnerable.
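
Client-side scanning deserves a closer look, because it is often presented as a compromise. The sketch below shows the basic idea under one common assumption – matching content fingerprints against a blocklist – though the bill itself prescribes no particular mechanism, and the blocklist entry here is a stand-in for the sake of the example.

import hashlib

# Illustrative blocklist of content fingerprints. Real proposals
# envisage databases of known illegal material; this is an assumption
# made purely for demonstration.
BLOCKED_HASHES = {hashlib.sha256(b"known illegal content").hexdigest()}

def client_side_scan(plaintext: bytes) -> bool:
    # The scan sees the message in the clear, on the user's device,
    # before any encryption is applied.
    return hashlib.sha256(plaintext).hexdigest() in BLOCKED_HASHES

message = b"Meet at the usual place."
if client_side_scan(message):
    print("Flagged before encryption - the private channel is bypassed.")
else:
    print("Passed; only now would the message be encrypted and sent.")

Because the check runs against the plaintext before encryption ever happens, expanding the blocklist expands surveillance without touching the encryption itself – which is what critics mean when they say client-side scanning undermines rather than complements end-to-end encryption.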

The bill needs significant amendment to balance the preservation of free speech, the removal of problematic content and the protection of UK users. The following policy recommendations could fix it.

Policymakers should replace the “legal but harmful” language in the bill by moving certain types of content from the lawful category to the unlawful one. This way, they would ensure that free speech is not over-moderated by online services. By requiring platforms to remove only unlawful content, and by adopting a model that holds users – not platforms – liable for unlawful speech, the bill would encourage responsible content moderation by users and services alike.

Take, for example, how the bill criminalises cyber flashing. The Online Safety Bill clearly defines cyber flashing as “intentionally send[ing] or giv[ing] a photograph or film of any person’s genitals to another person” to cause harm, distress or humiliation. The bill already inserts this definition into the Sexual Offences Act, squarely categorising cyber flashing as an illegal and harmful act. Scrapping the bill’s “legal but harmful” provisions and replacing them with a targeted list of illegal acts would force a real conversation among the public, lawmakers and civil society about which harmful online acts should be against the law.

Removing encrypted communication from the bill’s scope would likewise help preserve private free speech and users’ choice to communicate securely. When the safety of groups such as LGBT+ youth, activists and journalists is at stake, an online service should never be compelled to compromise privacy. By excluding messaging services from the list of covered user-to-user services, the bill would help protect UK users’ right to private communication online.

With free expression and user safety at stake, policymakers should focus on getting this world-leading legislation right the first time.
