Democracy in the era of deepfakes

In October a fabricated clip of Keir Starmer swearing at staff was shared online. Imagine the impact had it surfaced just before an election.

By Katie Stallard

In March 2022, three weeks into Russia’s full-scale invasion of Ukraine, Volodymyr Zelensky appeared in a video looking pale and weary. Standing behind a lectern in his familiar khaki fatigues, the Ukrainian president told his soldiers to lay down their weapons and surrender. Except that he didn’t. The video was a deepfake – a fake clip generated by artificial intelligence (AI) – that was quickly taken down by the major social media platforms. Zelensky dismissed the crude effort as a “childish provocation”. But it is not hard to see how a more sophisticated rendering – particularly if it had been combined with a communications blackout in Kyiv preventing the real Zelensky from correcting the record – could have altered the course of the war.

This was not an isolated incident. Ahead of the 2023 Turkish presidential election, Recep Tayyip Erdoğan promoted a video that appeared to show his main rival, Kemal Kılıçdaroğlu, being endorsed by the Kurdistan Workers’ Party – a designated terrorist group in Turkey. Again, the video was fake. Kılıçdaroğlu pointed out the manipulation, but the clip had already circulated widely. Erdoğan won the election in a run-off and embarked on his third decade in power.

There has also been a notable rise in political deepfakes in the US. In June the team of the Florida governor and Republican presidential hopeful Ron DeSantis released a campaign video purporting to show Donald Trump hugging and kissing Anthony Fauci. Fauci served as chief medical adviser to the president and became a bogeyman in conservative media over his handling of Covid. The advert combined authentic footage of Trump and Fauci with images that appeared to be AI-generated. Joe Biden has also been the target of deepfakes, including a video released in February that showed him announcing a national draft of Americans to fight in Ukraine, which resurfaced after the start of the Israel-Hamas war in October.

The situation will get worse. In 2024 there will be a collision between this rapid proliferation of generative AI technology and the myriad major elections due around the world, including in India, Indonesia, Taiwan, Russia, the UK (probably) and the US. Almost four billion people – more than half the world’s population – are expected to have the chance to vote in what will be the biggest election year in history. We are about to find out what happens when disinformation and conspiracy theories are supercharged by increasingly advanced and accessible deepfake technology.

It may already be too late to do much about it. “We do need to sort this out,” said Brad Smith, the president of Microsoft (which is the largest investor in OpenAI) at an event in London this summer, “by the beginning of the year, if we are going to protect our elections in 2024.”

Deepfakes emerged from pornography. The term first appeared on the online forum Reddit in 2017, in a post about the trend of using AI to superimpose celebrities’ faces on to porn actors’ bodies. A study of almost 15,000 deepfakes in 2019 by the AI company Deeptrace found that 96 per cent of the clips were pornographic, with 99 per cent of those featuring female celebrities.

Yet the manipulation of images – particularly those involving women and sex – is hardly new. In a recent essay Daniel Immerwahr, a professor of history at Northwestern University in Illinois, warned against catastrophising the dangers of deepfakes, citing a long, tawdry history of new media technologies being used to produce fake images. As far back as the 18th century, the decentralisation of printing presses resulted in a deluge of pornographic pamphlets about the imagined proclivities of Marie Antoinette. Photographs taken during the American Civil War turned out to have been staged, while the Soviet dictator Joseph Stalin had a penchant for erasing his political enemies from earlier photographs. Access to photo-editing software and the internet in the 1990s democratised image manipulation and distribution, a process that social media and AI have since magnified exponentially. Still, Immerwahr argues, for all the doom-mongering, there has not been a single example of a deepfake duping voters in any meaningful way.

Not yet. Others are less sanguine about the dangers ahead. The volume of deepfakes has risen sharply, while the costs of producing them are plummeting. A clip that could have cost $10,000 to produce just a year ago can now be made for less than the price of a latte.

Deepfakes have also become more realistic. A 2022 study by the Rand Corporation think tank found that between 27 per cent and 50 per cent of those surveyed could not tell the difference between an authentic video about climate change and a deepfake, with the proportion significantly higher among older adults – also the group most likely to vote.

The rise of deepfakes is taking place amid a catastrophic loss of trust in politicians and public institutions, and at a time when cutbacks at social media companies have hit content moderation jobs. Twitter, now X, dismantled its Trust and Safety Council (and reinstated Donald Trump’s account) after Elon Musk took over in October 2022.

AI-generated content may not have deceived significant numbers of voters yet, but the potential for it to do so is evident in the way text-based lies have already warped political discourse. In the US over the past decade, for instance, the 2016 “Pizzagate” conspiracy theory held that members of Hillary Clinton’s campaign team were running a paedophile ring from the basement of a Washington DC pizza restaurant. Though debunked, Pizzagate evolved into QAnon, an even more preposterous fantasy that depicts Trump as fighting a cabal of powerful, Satan-worshipping paedophiles who have supposedly taken over the US government. Then there was Trump’s false claim to have won the 2020 election, which prompted the attack on the US Capitol on 6 January 2021. All this without the addition of convincing audio or video fakes.

It is entirely possible to envisage how a well-timed deepfake could have a real impact on next year’s elections. An AI-generated video of Biden suffering a health crisis, or demeaning voters of colour, released on the eve of the ballot, for instance, could depress turnout or cause voters to switch their allegiance. So could a clip of a presidential candidate in Taiwan purportedly discussing plans to provoke war with China or sell out the territory’s vital semiconductor industry. There have already been anti-Muslim riots in India over fake videos depicting attacks on Hindu worshippers; it’s easy to imagine how a similar incident could trigger political violence in the run-up to the election there in April. In October a clip of Keir Starmer supposedly swearing at his staff was shared online. Imagine the impact of a similar video surfacing right before the election, showing the Labour leader discussing plans to increase immigration or rejoin the EU.

But the most damaging impact this technology could have on our democracies may be what the philosopher Joshua Habgood-Coote has called the “epistemic apocalypse”. In an era of increasingly sophisticated deepfakes, disinformation and conspiracy theories, perhaps the greatest challenge will be preserving the belief in the existence of objective truth at all.
