Be it misinformation, harmful content, bullying or harassment – if a social media platform has a serious problem, you can rely on those with the power to regulate and hold that platform accountable to do effectively nothing about it. This has been the lesson of the last decade. Since the mid-2010s, with the rise of online radicalisation, disinformation and the Cambridge Analytica scandal, the negative influence of social media on our culture has become inarguable, and there has been a clear need for strict laws and regulations around what gets shared online and how our data is used by platforms. And yet politicians and governments have failed to intervene – offering light-handed, belated legislation (rapidly outpaced by platforms’ technology) at best.
The same explanations are given every time: that those in power underestimate the harm social media can cause and don’t understand how these platforms even work, or worse – they have conflicting interests that discourage them from making enemies of Big Tech. It has become widely accepted that platforms will only change as a result of bad PR and public pressure, motivated by their profit margins rather than by the watchful eye of regulators.
Yet could the status quo finally be changing? On 24 October more than 40 US states filed lawsuits against Meta, the parent company of Facebook and Instagram, arguing that it engaged in a “scheme to exploit young users for profit”, misled young users about how their data would be used and made its platforms deliberately addictive for children.
These lawsuits are the first of their kind against a social media company. They take a dual approach. Firstly, they argue that Meta knew it was creating a product that was both addictive and harmful for children. This claim follows on from the Facebook Papers, leaked by the whistleblower Frances Haugen in 2021, which claimed among many other chilling details that Meta knew its platforms were causing mental health problems for younger users, specifically teenage girls. The leak included evidence that an internal study found that Instagram was causing self-esteem issues among this demographic, but that the platform still kept serving them harmful content, such as posts promoting eating disorders. The US lawsuits argue that this violates consumer protection laws that have increased safety rules for under-18s, laws which also exist in a similar form in the UK.
The lawsuits also argue that Meta’s platforms violate a long-standing American data privacy law, the Children’s Online Privacy Protection Act, which prohibits tech companies from collecting data on children under the age of 13 without parental consent. For years, almost every social media platform – including those outside the Meta umbrella, such as TikTok – has avoided engaging with this law by claiming all users must be at least 13 years old to create an account (if a child manages to skirt those rules and create an account, companies claim they will remove them as soon as they are aware). But on most platforms it’s all too easy for under-age users to get past the very low barriers to entry, and to go undetected for too long. These lawsuits aim to force Meta to dramatically improve its age verification systems and to increase measures that prevent young children from accessing its platforms. They also expose Meta to large fines, and forcing the company to make big changes to how its sites function would undoubtedly send a message to other platforms with young users, such as TikTok and YouTube.
Such large-scale legal action by dozens of states in cooperation is extremely unusual – unprecedented when it comes to Big Tech. These lawsuits are similar to other historically significant ones, however, such as those against tobacco companies and Big Pharma. Meta also appears more concerned about this than past government action. Andy Stone, a Meta spokesperson, described these suits as a “political stunt” and said the allegations were “false and demonstrate a deep misunderstanding of the facts” – far harsher language than the company would usually use.
But it may be difficult for these US states to achieve their aims. In other historic lawsuits against unregulated companies there has been a wealth of clear and direct evidence of harm (such as the health risks of smoking or the damage caused by opioid abuse). The harms caused by social media may be more subjective and harder to pinpoint. Another major hurdle the lawsuits face is one that is likely to come up in any attempt to regulate the internet: things that are harmful for some users on some platforms may be useful and harmless to other users elsewhere. Features such as push notifications and visible like counts exist on essentially every website and smartphone app in the world, and aren’t always going to be a problem (even if there is evidence that they can become addictive or affect user self-esteem). It will be hard to make the case that there should be laws against some of these features, even if doing so would improve the mental health of millions of teens.
There are tall hurdles for the states to clear if they are to succeed. But regardless, this signals a serious and significant shift in how those in power are treating Big Tech. For the first time we’re seeing a major government address problems in a way that could make a substantial difference to how the world’s biggest social media platform operates. Even if only a portion of these lawsuits’ claims were addressed by Meta, it could considerably reduce the risk of harm caused to teenage users, and could shield children from accessing these sites too young. These lawsuits are not the only action we’re seeing from the US government against Meta either: the Biden administration is also looking into the company’s child safety record, and the Federal Trade Commission is putting together a plan to keep Meta from monetising the data it collects from children.
These lawsuits send a message not just to Meta, but to every major social media company. They warn that the era of platform appeasement – which looked like it would go on forever – may be coming to an end. After years spent tiptoeing around these tech giants, governments are finally waking up to the scale of the damage caused by social media, and finally using the power they possess to stop it.