Today (21 April) England’s former Children’s Commissioner, Anne Longfield, unveiled plans for a class action lawsuit against the social media platform TikTok on behalf of “millions” of children across the UK and the European Economic Area whose data has allegedly been illegally processed by the company.
Lawyers at Scott+Scott, who are supported by Longfield and backed by an unnamed litigation funding firm, are set to argue that TikTok “wilfully takes children’s personal information without sufficient warning, transparency or the necessary consent required by law, and without parents and children knowing what is being done with their private information”. This data, the lawyers claim, includes telephone numbers, videos, pictures, locations and biometric information used for facial recognition. TikTok is accused of being “deliberately opaque” about who has access to the data.
A spokesperson for TikTok told the New Statesman: “Privacy and safety are top priorities for TikTok and we have robust policies, processes and technologies in place to help protect all users, and our teenage users in particular. We believe the claims lack merit and intend to vigorously defend the action.” TikTok claims it only shares data with advertisers where it has consent or a lawful basis for doing so.
Speaking to the New Statesman earlier this week, Longfield said: “TikTok has clearly taken some steps which give some reassurance that they understand the importance of [child safety]. I really welcome that there are people there specifically to look at child safety and to look at how they can get the company ready for the Online Harms legislation.”
Longfield added: “But that is always going to be severely limited whilst children under 13, who aren’t consenting with the policies on the site, are having their data illegally collected, and I think that then compromises any actions they take in terms of child safety.”
The forthcoming UK Online Harms legislation is currently in the consultation phase and is expected to be introduced next year.
This is the latest episode in a growing battle between Big Tech and organisations representing the data rights and protection of children.
In July 2019, the Information Commissioner Elizabeth Denham revealed that her office had opened an investigation into TikTok. With its focus on short-form video, the Chinese-owned social network would soon surpass Instagram, WhatsApp and Snapchat as one of the world’s most downloaded apps, and find itself at the heart of the US-China trade war.
Denham’s probe, however, focused not on the geopolitical concerns surrounding TikTok, but the way in which it treated its youngest users. “We are looking at the transparency tools for children,” she said at the time. “We’re looking at the messaging system, which is completely open [and] we’re looking at the kind of videos that are collected and shared by children online.”
As scrutiny of the company intensified over the following months, TikTok started building out a high-profile European lobbying team led by Theo Bertram, a former Google executive and adviser to Tony Blair and Gordon Brown. It hired a number of policy officials and lobbyists in London, Paris, Berlin and Brussels, including a head of child safety public policy, before introducing several new safety features.
In a paid-for article in the Evening Standard marking Safer Internet Day in February this year, the company outlined what it had done to make its platform safer for users under the age of 16, from setting registered accounts to private by default, to switching off direct messaging, introducing a family account pairing service and refreshing its community guidelines “to make it clear what is not acceptable on our platform”.
But as the backlash over Big Tech’s efforts to monetise children’s data gathers pace, critics warn that TikTok is one of many social media firms doing too little to protect not only the under-18s who use it, but the under-13s too.
One of the central arguments of Longfield’s case is that TikTok fails to stop under-13s, who cannot legally consent to their data being processed, from accessing the app. While users are asked to provide their date of birth when signing up, nothing stops under-13s from claiming to be older than they are.
This isn’t an easy issue to solve: forcing users to provide proof of identity to a company like TikTok would raise privacy and free speech concerns of its own. Similarly, requiring TikTok to use AI systems to identify underage users, as Longfield suggested to the New Statesman, could raise data protection concerns, as it already has in Italy.
But none of this changes the fact that, despite TikTok not being legally allowed to process under-13s’ data, 44 per cent of children aged 8-12 in the UK use the platform, according to Ofcom research.
Twice as many 8-12-year-olds in the UK use YouTube, but Google has developed a child-friendly version of that platform, YouTube Kids, and Facebook is developing plans for a version of Instagram for under-13s. TikTok has built a similar service, TikTok for Younger Users, for the US market, but it hasn’t rolled it out in the UK or Europe.
This highlights a broader point that applies not only to TikTok but to the tech sector more widely: while executives have argued for years that they can be relied upon to self-regulate their platforms, they tend not to introduce safety features that would damage their bottom line unless legislators and regulators force them to do so.
It may be some time before the TikTok case proceeds. The claim has been “stayed” until a similar High Court case against YouTube and its parent company Google, which begins next week, concludes later this year.
But regardless of the outcome of either case, it’s clear that there is growing resistance from politicians, regulators and parents against tech firms’ efforts to monetise children’s internet usage; Longfield and fellow critics of TikTok hope that the risks of continuing to make money in this way will soon overshadow the rewards.