If in the past decade social media has operated as a rambunctious free-for-all, in the coming years governments will attempt to bring a semblance of order to online spaces. That is either a relief or an egregious threat to civil liberties, depending on one’s ideological predisposition.
The UK government’s Online Safety Bill, which is approaching the pre-legislative scrutiny phase, is an example of the law scrambling to catch up with the febrile atmosphere of social media. According to the government, the bill’s aim is to make Britain “the safest place in the world to be online”.
When the bill becomes law, which is expected to be this year or next, social media platforms will have a “duty of care” to protect users from harmful content. Companies that fail to remove such content will face fines of up to £18m or 10 per cent of their annual global turnover, whichever is the higher.
There are legitimate concerns about the bill. The Conservative backbencher and civil liberties campaigner David Davis has described it as a “censor’s charter”, and one weakness of the proposed law is its vagueness. Social media companies will be required to remove content that may be legal but that the government considers “harmful” – a subjective term.
Perhaps more concerning is how the bill would potentially allow ministers to modify the code of practice Ofcom uses to protect users so that it “reflects government policy”. This risks undermining the regulator’s independence. Not that the government appears particularly worried about that: it recently attempted to appoint the former Daily Mail editor Paul Dacre to the role of Ofcom chair. The nomination of such a deeply ideological figure – and someone who is close to the government on plenty of cultural issues – risks politicising the role.
With that in mind, I am inclined to side with critics of the bill. Like them, I do not wish my free speech to be potentially curtailed because it offends the values of the Daily Mail. But critics of the bill would do well to acknowledge that the online status quo has become untenable. Harassment and abuse are rife. I get my fair share of it: in our “anti-elitist” culture we journalists – together with pretty much every other public figure, big or small – are considered fair game.
This is fine up to a point: if you put yourself out there it is inevitable that some people are not going to like you. But reassuring homilies about “sticks and stones” downplay the emotional impact that sustained online abuse can have on its recipients.
At present, social media empowers harassers, stalkers and various other creeps. Existing laws against harassment and hate speech can be less effective when transposed to the online environment: when a social media account is taken down, abusers can set up multiple new anonymous accounts with ease. It is therefore right for the government to put pressure on tech companies to stamp out such behaviour.
The balance between civil liberties, privacy and the right not to be hounded by online tormentors can be a complicated affair. Sometimes even well-intentioned regulation can have unintended consequences. The European Union’s General Data Protection Regulation (GDPR) is often cited by privacy campaigners as a legal bulwark against the panopticon of Big Tech. Yet GDPR can make it harder to assist victims of so-called revenge porn, one of the most egregious violations of privacy imaginable.
Facebook recently introduced a pilot scheme that uses AI technology (the site’s facial recognition is supposedly more accurate than the FBI’s) to block intimate images from being shared on its platforms before they are uploaded. Potential victims can confidentially submit images that they fear somebody else may share. The images are then hashed by Facebook’s algorithm and deleted; any user who subsequently tries to share a matching image is pre-emptively blocked from doing so.
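In outline, the scheme works like a fingerprint blocklist. The sketch below is purely illustrative: Facebook’s real system reportedly uses perceptual hashing, which survives resizing and re-encoding, rather than the exact byte-level hash used here, and the function names are hypothetical.

```python
import hashlib

# Hypothetical, simplified model of a hash-and-block scheme.
# Real systems use perceptual hashes, not exact SHA-256 matching.
blocked_hashes: set[str] = set()

def hash_image(image_bytes: bytes) -> str:
    """Derive a fingerprint of the image."""
    return hashlib.sha256(image_bytes).hexdigest()

def register_private_image(image_bytes: bytes) -> None:
    """A potential victim submits an image; only its hash is retained,
    and the image itself is then deleted by the platform."""
    blocked_hashes.add(hash_image(image_bytes))

def can_share(image_bytes: bytes) -> bool:
    """Any later upload whose hash matches is pre-emptively blocked."""
    return hash_image(image_bytes) not in blocked_hashes

register_private_image(b"victim-submitted-image")
print(can_share(b"victim-submitted-image"))  # False: matching upload blocked
print(can_share(b"unrelated-image"))         # True: unrelated image passes
```

The design point is that the platform never needs to keep the image itself – only its hash – which is why the approach is compatible with deleting the submitted file.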
Unfortunately, Facebook’s algorithm is not as effective at stopping revenge porn as it might be. That can be traced in part to GDPR, which prohibits tech companies from looking inside personal messages (ie Instagram DMs and Messenger). The definition of electronic communications under EU law changed in 2021 to include messaging services, bringing them under the 2002 ePrivacy directive – which, unlike GDPR, does not include measures to detect child sexual abuse. Since perpetrators often use these channels to send intimate images to the victim’s friends, relatives and partners, a lot can slip through the net.
This is not to say that Facebook should be free to snoop arbitrarily on its users. Social media platforms already have a disconcerting habit of censoring users based on whichever way the political winds are blowing. YouTube content creators have experienced “demonetisation” for airing controversial views. Sometimes this is the right decision; but such decisions are often arbitrary and unaccountable.
Critics of the government are right to warn that the Online Safety Bill risks giving us the worst of both worlds: politicised regulation from the government to accompany what we already have from Big Tech. But it would take a particularly naive view of capitalism to wish to leave things as they are: to tacitly accept vast corporations as better arbiters of abusive content than democratic governments. We do need an enforceable set of rules to govern online spaces. What we don’t want is for the cure to be worse than the disease.