30 May 2019 (updated 7 Jun 2021, 4:31pm)

The doctored video of Nancy Pelosi shared by Trump is a chilling sign of things to come

By Nicky Woolf

On 24 May, the president of the United States tweeted out a video purporting to show Nancy Pelosi, the Democratic speaker of the House of Representatives, slurring her words, seeming almost drunk. President Trump accompanied the video with the all-caps label “PELOSI STAMMERS THROUGH NEWS CONFERENCE”.

The clip was taken from a Fox News report, and was one of several videos circulating that portrayed Pelosi in that light. They had been shared by right-wing media outlets and widely on social media, including by Trump and his personal lawyer Rudy Giuliani. But the videos share one common feature: they are all fakes, carefully and selectively edited to make Pelosi look infirm or intoxicated.

The fact that the video was fake didn’t stop Trump tweeting it, Fox News playing it, or social media users sharing it. And despite the backlash the video has caused, its continued spread highlights an uncomfortable truth: as videos like this get easier and easier to fake, and fakes get harder and harder to tell from real video, this is what the future of media is increasingly going to look like.

The rise of so-called “deepfake” videos has been causing alarm for some time now. Deepfakes are the product of a sophisticated set of AI and computing techniques through which a video can be doctored to replace its subject with someone else, inserting the face and voice of almost anyone you choose and having them do almost anything you like – as shown by a much-shared video of Barack Obama that was produced as a demonstration of the technology’s capabilities.
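
To make the mechanics concrete, below is a minimal sketch of the architecture many early deepfake tools were built around: a single shared encoder that learns a generic representation of faces, and one decoder per identity. Every detail here – the layer sizes, the random stand-in data, the toy training loop – is an illustrative assumption rather than any real tool’s code, and a production system would also need face detection, alignment and vastly more training.

```python
# An illustrative sketch (not any production tool's code) of the classic
# deepfake architecture: one shared encoder learns a common representation
# of faces, and one decoder per identity learns to reconstruct that person.
# Swapping = encode person A's face, decode it with person B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64px -> 32px
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32px -> 16px
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # the shared latent "face code"
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16px -> 32px
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32px -> 64px
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person
params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

# Random stand-in data: in reality, aligned face crops of each person.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):  # toy training loop
    opt.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The swap: person A's pose and expression, rendered with person B's face.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

The shared encoder is the key design choice: because both identities pass through the same bottleneck, the latent code captures pose and expression while each decoder supplies the face, which is what makes the final swap possible.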

Deepfakes first appeared in pornography – an industry which has often been a powerful engine for the development of new technologies, especially online. But their potential application in the world of political misinformation is immense and potentially catastrophic.

Already, the internet age has been marked by the ebbing away of trust in objective news outlets, replaced by a proliferation both of hyper-partisan sites for which truth is less important than winning the culture war, and of purely fake news sites designed to attract clicks for sheer profit. This was caused in part by the paradigm shift in the media ecosystem that occurred when social media sites such as Facebook and Twitter became the first port of call for information, displacing trusted news organisations.

During the 2016 election, according to analysis by BuzzFeed News’s Craig Silverman, fake viral stories dramatically outperformed real news on Facebook, with the top 20 fake stories generating more than eight million shares, comments and reactions. In what was called a “digital gold-rush”, the small North Macedonian town of Veles became an unlikely centre of fake news production, with more than 100 sites operating there churning out entirely made-up stories designed to attract viral clicks. Many of them had hundreds of thousands of followers on Facebook, BuzzFeed News reported.

The Pelosi video isn’t technically a “deepfake”. Rather than projecting Pelosi into an entirely new video, it relies on selective editing, the speeding-up and slowing-down of footage that already exists, and pitch-correction of the audio track – a cruder approach that has been called a “shallowfake”. But both the fact that the president tweeted it, and the way the response to its widespread sharing played out, raise a terrifying spectre of what the future of politics and news might hold.
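
Just how little a shallowfake requires is easy to demonstrate. The sketch below is a hypothetical reconstruction of the general recipe, not the actual edit: it assumes a machine with the ffmpeg command-line tool installed, and the file names and the 75 per cent speed factor are invented for illustration. Naively slowing a clip would also lower the audio pitch and give the edit away, which is where the pitch-correction comes in: ffmpeg’s atempo filter slows the audio while leaving its pitch intact.

```python
# A hypothetical sketch of the "shallowfake" recipe described above,
# wrapping the ffmpeg command-line tool (assumed installed). File names
# and the speed factor are illustrative assumptions.
import subprocess

SPEED = 0.75  # play back at 75 per cent of the original speed

subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-filter_complex",
    # setpts stretches the video timestamps to slow the picture;
    # atempo slows the audio by the same factor *without* lowering
    # its pitch, masking the manipulation.
    f"[0:v]setpts=PTS/{SPEED}[v];[0:a]atempo={SPEED}[a]",
    "-map", "[v]", "-map", "[a]",
    "output.mp4",
], check=True)
```

That a dozen lines and no AI at all are sufficient is precisely the point.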

John Villasenor, a nonresident senior fellow in governance studies at the Center for Technology Innovation at the Brookings Institution, says that fake videos like that of Pelosi, as well as more sophisticated deepfake-style videos, are “clearly going to be a factor in the 2020 campaign and beyond”.

“If we look at the kind of information manipulation you saw in the 2016 campaign, this is an inevitable next step given the increasing availability of this technology,” Villasenor tells me. “So unfortunately this sort of thing is going to be a feature of the political landscape.” The problem isn’t just the direct effects of any particular fake video: it is what the proliferation of the technology will do to how we as a society understand the world around us. The rise of fake video “scrambles our relationship with the truth,” Villasenor says, “because it becomes hard to know what to believe.”

“It works in two directions,” he continues. “When we see a video, we can’t necessarily be sure that what it’s portraying is accurate. Fake videos are obviously a problem because they make people appear to do things they never did or said, but the existence of deepfakes also reduces our trust in actual videos that have not been manipulated. It affects our relationship between … what’s real and what isn’t.”

If video evidence of Trump or any politician committing some egregious crime emerged today, it would cause political damage in a predictable way. But soon, the development of these technologies will enable and encourage a default position: simply calling all unflattering video evidence fake, the way Trump already does with news coverage he doesn’t like. For those who do, there is no downside: visual evidence will no longer be considered trustworthy anyway. It’s a plague on everyone’s house, sure, but it will be most beneficial to nationalist politicians like Trump and his ilk, who thrive when cognitive dissonance undermines the shared civilisational concept of objective truth.

This can be seen as a potential precursor to societal calamity – an epistemological collapse. According to the philosopher Hannah Arendt, a polity losing the ability to tell truth and lies apart is a key precondition for the rise of a totalitarian state. “The result of a consistent and total substitution of lies for factual truth is not that the lie will now be accepted as truth and truth be defamed as a lie, but that the sense by which we take our bearings in the real world – and the category of truth versus falsehood is among the mental means to this end – is being destroyed,” she wrote in her 1967 essay “Truth and Politics”.

The Pelosi video was quickly debunked, and there are still plenty of ways to discern when a video has been manipulated or uses deepfake techniques. But the technology is getting more sophisticated on both sides. “Every time the detection technology gets better, then the tech for producing them can be tweaked so that it is better at defeating the detection tech,” Villasenor says. He describes it as “an arms-race”.
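
That arms race is not only a metaphor: it is literally how one of the most common generation techniques works. Generative adversarial networks train a “forger” network and a “detector” network against each other, each improving in response to the other. The toy sketch below shows the loop in miniature; the network sizes and random stand-in data are invented for illustration and bear no relation to real deepfake models.

```python
# A toy sketch of the adversarial loop: a generator ("forger") and a
# discriminator ("detector") improve by competing, the arms race in
# miniature. All dimensions and the stand-in data are assumptions.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
detector = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

real_data = torch.randn(128, 32)  # stand-in for features of genuine footage

for step in range(200):
    # Detector update: learn to score real footage 1 and fakes 0.
    fakes = generator(torch.randn(128, 16)).detach()
    d_loss = (loss_fn(detector(real_data), torch.ones(128, 1))
              + loss_fn(detector(fakes), torch.zeros(128, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Forger update: produce fakes the newly improved detector scores as real.
    fakes = generator(torch.randn(128, 16))
    g_loss = loss_fn(detector(fakes), torch.ones(128, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Each side’s improvement is the other side’s training signal, which is why, as Villasenor says, better detection tends to beget better fakes.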

“In our polarised discourse, there is ample interest in manipulating audio and video for the sake of advancing one political stance,” Charlotte Stanton, a fellow in the Technology and International Affairs program at the Carnegie Endowment for International Peace and the director of the endowment’s Silicon Valley office, tells me. “Even though this example doesn’t use AI – it’s not a deepfake – it still gives us a sense of how realistic false content can be. I watched it and couldn’t tell it had been distorted even though I had been told it was.

“When you have Trump and Giuliani passing on [that video] to a very large audience, it shows that it is going to be a very effective tactic for people who want to sow disinformation,” Stanton adds.

Part of the problem is that social media companies like Facebook have long been structurally resistant to the idea of being a publisher, as opposed to a platform – that is, to the idea of applying any more than the legal minimum of human editorial judgment to the content that their users share. YouTube did eventually remove the doctored Pelosi video, but Facebook and Twitter did not. “Most of us look at the Pelosi video and think, ‘that’s wrong’,” Stanton says. “But because there aren’t regulations for what media and disinformation is wrong, the platforms have to take this on themselves – and they don’t feel comfortable taking this decision.”

To some extent, this is understandable: for a company like Facebook, even designing guidelines that would control for fake content like this without also capturing, for example, satire is a near-impossible challenge – let alone applying such policies across the vast amount of content produced by billions of users every day.

But that means that our information ecosystem appears to be in an inexorable slide towards a situation where it may become next to impossible to sift truth from lies. When people can no longer believe their eyes, the result could be catastrophic. The framework for this disaster is already in place, the technology is just around the corner, and many politicians – including the president – are only too ready to take advantage of the new reality. Whether democratic society will be able to adapt and deal with it is anybody’s guess.
