
31 January 2024

The danger of deepfakes goes far beyond Taylor Swift

Why has it taken an attack on a global star for politicians and regulators to get serious about protecting women from deepfake porn?

By Sarah Manavis

Why do people create deepfake pornography? To arouse, to deceive, to manipulate? Perhaps all of the above. But mostly, it is designed to humiliate. Deepfakes – which convincingly superimpose an individual’s face on to a photograph or film, often pornographic – are an attempt to disempower and embarrass their subjects. They are typically made by anonymous internet users, often of women those users personally know. Almost anyone is a potential victim, as long as enough photographs and videos of them circulate online. Images are easily manipulated using free websites and apps, with no consequences for their creators, and pornographic deepfakes are routinely used as a revenge tactic against outspoken, feminist, or otherwise successful and high-profile women.

It felt inevitable, then, when a series of pornographic deepfakes of Taylor Swift flooded X, the platform formerly known as Twitter, in late January. The images were, for a short period, hard to miss, even causing “Taylor Swift AI” to trend on the platform. (Like much AI-generated porn, the images were obviously fake.) A viral post sharing one of these deepfakes – from a verified account – received more than 45 million views, 24,000 shares, and hundreds of thousands of bookmarks and likes before the user’s account was suspended, 17 hours after posting. A report from 404 Media, a news outlet that covers technology, traced the posts to a Telegram group whose members share sexually explicit AI-generated images of women, some made using Microsoft Designer, a free image generator.

The posts were in violation of both the platform’s media policy and, as X wrote in a statement, “a zero-tolerance policy towards” content containing non-consensual nudity – but they were slow to be taken down. In an apparent attempt to suppress the spread of the images, X temporarily made Swift’s name unsearchable on the site. Despite this intervention, and while many posts and accounts have now been removed or suspended, the images continue to proliferate on the site, reshared by smaller accounts. In response, X users – not just Swift fans – have been vocal about the dangers of the platform and, more generally, how few protections there are for the victims of deepfake technology.

In recent months a growing number of posts containing misinformation or explicit images have been shared on X, despite the platform’s own rules prohibiting them (including posts relating to the Israel-Hamas war, over which the EU has formally opened an investigation into X). But this time, users’ outrage has not been ignored. A spokesperson for President Joe Biden issued a statement describing the images as “alarming”, and the White House press secretary, Karine Jean-Pierre, encouraged Congress to legislate on the issue. She said social media platforms have “an important role to play in enforcing their own rules”. The CEO of Microsoft, Satya Nadella, told NBC News that the company needed to act and that “it behoves us [all] to move fast on this”. Sitting Democrats and Republicans alike have since called for urgent legislation and Big Tech regulation to establish effective safeguards against deepfakes.

This is a proportional response to an alarming trend. It also reflects politicians’ growing resolve and savviness in regulating the internet. In the US, in autumn 2021, Facebook executives faced congressional hearings on the potential harm the platform poses to teenagers’ mental health (40 states have recently filed lawsuits against its parent company, Meta). In the UK, the Online Safety Act, passed in October 2023, criminalised sharing pornographic images, real or fake, without the consent of those pictured.


Are politicians finally waking up to the unruliness of the internet? It may be tempting to think that the days of sluggish governments allowing technology to outpace legislation are over. But it is striking that it has taken a scandal involving Taylor Swift to provoke an urgent response. Language such as “moving fast”, “alarm” and “acting now” suggests that deepfake pornography hasn’t been an endemic problem online for nearly a decade. There are already dozens of popular, well-established porn sites dedicated specifically to explicit content using celebrities’ likenesses. But deepfakes don’t just affect celebrities – they also target ordinary people with far less power, money and support than Swift, whose videos will be less easily dismissed as fake.

While some politicians in the US and UK have grappled with the effects and dangers of deepfake technology, this is the first time there has been a critical mass of policymakers treating it with appropriate urgency. The timing is relevant: 2024 is an election year in the US. Recent polling conducted by Redfield & Wilton Strategies for Newsweek found Taylor Swift’s endorsement could sway nearly a fifth of American voters; the New York Times reported that Swift was the Biden campaign’s number one endorsement target.

Deepfakes aren’t new, and many more women than Taylor Swift are victims of them. And deepfake pornography will only become more realistic. Without effective regulation, as the technology advances and we leave the uncanny valley, it will soon become impossible to determine what’s false and what isn’t. We should be sceptical, however, of policymakers who only now, in this defining election year, are suddenly eager to broadcast their concern about a problem that has for too long been ruining ordinary women’s lives – and will continue to, if governments do not act.

The danger of deepfakes goes far beyond any harm they cause to Taylor Swift.



This article appears in the 31 Jan 2024 issue of the New Statesman, The Rotten State