On Friday, BuzzFeed News reported that the government was putting together a plan for an internet regulator, similar to Ofcom, that would make tech companies “liable for content published on their platforms” and would give the government the power to legally “sanction companies that fail to take down illegal material and hate speech within hours.” As with every decision taken by the government, the idea was immediately derided, with users arguing that an internet regulator is a “bad idea” that could limit speech and become a barrier to innovation.
There’s a justifiable amount of paranoia about the government policing what is and isn’t allowed on the internet. Net neutrality in the United States – the idea that internet providers should give the same standard of service, in terms of speed and quality, regardless of how much a customer is paying – kicked off modern discourse about how, and when, the government should be involved in regulating the internet. Although an order requiring internet providers to give unhindered, equal access to the internet to every one of their customers was adopted in 2015, then repealed in 2018, by the American Federal Communications Commission (essentially an American Ofcom), repeated attempts to pass net neutrality into law in Congress have been fruitless, with Republican arguments that government involvement in the internet is a step in the wrong direction winning out over Democratic arguments for enshrining fairness into law.
The net neutrality debate fed a growing global suspicion that any government involvement in the internet is, and always will be, inherently bad. Concerns about government bodies involving themselves with a thing they are infamously terrible at understanding are not entirely unfounded. However, the reported government proposals suggest something different. Rather than a blanket policy, they point to a government regulator that could keep the vast majority of our internet experiences as they are today – while simultaneously managing to cut back the worst parts of being online that make the internet an often dark, dangerous place.
As the BuzzFeed piece notes, should the proposed regulator emerge, it would be a replica of what already exists in other western democracies. The proposals look similar to new laws that went into full effect in Germany in January of this year. Known as “NetzDG”, short for Netzwerkdurchsetzungsgesetz (the Network Enforcement Act), the law created real and severe consequences for internet sites that fail to monitor their content. Now, for-profit companies with more than two million active users operating in Germany must remove hate speech within 24 hours of it being flagged; if it is not removed in that time frame, the company can face financial penalties of up to €50m.
The potential financial penalty is not spelled out in the UK government proposal, but the proposed regulator would take a similar approach: requiring tech giants to remove content within a set window or face a significant fine.
While these measures may seem extreme, they merely provide the motivation for tech companies to create ways to monitor content that they should, arguably, already have. Tech companies have been banning users and content cluelessly, with an obvious lack of oversight – ironically, harmless content gets removed while many users inciting violence, spamming hate speech and sending death threats get off scot-free.
There have been some teething issues with the German law, such as how to deal with satirical content and cartoons, but the legislation is undeniably a step towards what tech giants should be doing in the first place. And introducing similar regulations in the UK would give tech companies an unavoidable incentive to start actually watching what their users post.
Although some of the policies reported in the potential regulator’s remit, inevitably, appear half-baked to say the least (see: crackdowns on advertisements for sugary foods, already managed by the Advertising Standards Authority), an internet regulator should not be feared. A body forcing tech companies to remove racial slurs, death threats, and graphic content from our social media feeds is not an obstruction to free speech or innovation, but something long-needed online. And until tech companies start taking responsibility for their content, it should be considered a welcome change.