The TikTok algorithm has near-mythic status. No one really knows how it works, but it will figure out your deepest content desires in seconds. It drives much of TikTok’s success and the mythology surrounding it: that no matter who you are or what you like, you will find it almost instantly on the app. Little is said about that content’s quality, or about the costs of serving up personalised content at such an unprecedented, unrelenting rate.
This week, a report suggests what some of those costs might be. In a series of experiments, the online trust-checker NewsGuard found that TikTok showed users “false or misleading” information about the war in Ukraine within 40 minutes of their signing up to the app, regardless of whether they ran any relevant searches on the platform. The report also found that when users searched for generic terms relating to the conflict (such as “Ukraine”, “Russia” or “Kyiv”), disinformation appeared in the top 20 suggested search results.
The false claims included videos sympathetic to both sides of the conflict. Falsehoods “peddled by the Kremlin” included claims that the war was being faked, that Ukraine was led by a neo-Nazi junta and that the US had bioweapon laboratories in Ukraine. The pro-Ukraine messages included a false claim that US forces were “on the way”, a claim that Putin was “photoshopped” into footage of a press conference he gave on 5 March to hide the fact that he was not in Moscow, and footage of the “Ghost of Kyiv”, a pilot alleged to have shot down six Russian jets (the clip was actually from the Digital Combat Simulator video game). The NewsGuard team ran the tests by simply signing up to TikTok and watching any Ukraine-related videos in full and, for the second set of tests, searching for Ukraine-related terms.
This report shouldn’t come as much of a shock. Since the beginning of the war, TikTok has been flooded with false information relating to Ukraine (the volume of Ukraine-related content on the app outpaces that on any other platform). And, of course, it didn’t begin here: TikTok has had rampant misinformation problems over the past few years, from Covid-19 to the Capitol Hill riot to QAnon.
Some work has been done to curb the spread of misinformation on TikTok. On 10 March Joe Biden spoke to some of the biggest American TikTokers to brief them on the facts of the conflict, while many creators on the app have produced useful debunks of viral misinformation (such as one influencer who noticed that several Russian TikTokers were parroting the same Putin-backed messages). TikTok itself has taken some steps against the spread of disinformation, using algorithms and human moderators to comb through hashtags and trending topics, as well as adopting “increased safety measures” and adding digital literacy tips to its Discover page.
However, it’s clear that these measures are not enough to blunt the onslaught of false information, and it seems likely that, to truly stop it, TikTok would have to make major changes to its algorithm. The algorithm has its obvious upsides and, most of the time, it is harmless. It seems unlikely, for now, that TikTok will have much incentive to do more than make small adjustments. In moments like this, however, the costs become clear.