
Comment
5 February 2025

Why is it still legal to make deepfake porn?

Criminalising the use of a woman’s image without her consent shouldn’t be a complex issue.

By Zoë Huxford

In 2017 the Reddit user u/deepfakes shared a pornographic video they had made. Hardly a rare occurrence, but this particular video was unusual: one person’s face had been superimposed on the body of another to create a digitally altered chimera. Others could do this too, u/deepfakes explained, using Google’s open-source deep-learning library and images of a celebrity. Soon, the internet was awash with “deepfaked” pornographic content as women had their likenesses used in sexually explicit images without their consent. Taylor Swift, Alexandria Ocasio-Cortez and Giorgia Meloni are among the high-profile victims, but last year, as the New York Times reported, schools across the US saw a rise in teenagers making deepfake nudes of their female classmates.

In Britain it has been illegal to share or threaten to share deepfakes since 2023, but creating the content itself is not. Baroness Charlotte Owen, the youngest ever life peer, is among the activists seeking to criminalise the creation of pornographic deepfakes. Women, she says, “have lost the ability to choose who owns sexually explicit images of them”. She is right to point to the phenomenon’s gendered aspect: research shows 98 per cent of deepfakes are sexually explicit, and 99 per cent of those depict women and girls. The inception of deepfake technology saw the rise of a community built around the violation of female autonomy.

Successive governments have committed to legislating against the production of deepfakes (Rishi Sunak in April 2024, Keir Starmer in January 2025). Labour’s 2024 manifesto pledged “to ensure the safe development and use of AI models by introducing binding regulation… and by banning the creation of sexually explicit deepfakes”. But what was assured in opposition has been slow to materialise in power: the promised legislation was a notable omission from the King’s Speech. Generative AI has rapidly outpaced the law, and urgent action is needed to close the gap.

Hardly anyone appears to object to criminalising the production of deepfakes. But there is disagreement over how to do it. Owen and her fellow campaigners advocate what is known as a “consent-based approach”: criminalising anyone who makes this content without the consent of those depicted. But this approach was deemed incompatible with Article 10 of the European Convention on Human Rights (ECHR), which protects freedom of expression, and her bill was ultimately not backed by the government.

The government’s original plans to criminalise the creation of deepfakes contained a stipulation to sidestep this problem: to convict, it would be necessary to prove that the perpetrator intended to cause the victim “alarm, humiliation or distress”. But this so-called intent-based approach is insufficient. Campaigners argued that having to prove malicious intent would make convictions far harder to secure. Moreover, there is clearly a fairer option available: centring the consent of the victim, not the intent of the producer.

In a victory for Owen and other campaigners, the government has moved away from the intent-based approach and the high burden of proof it entailed. But why were ministers so hesitant to put consent at the heart of their war against deepfakes in the first place? And what should we think of lawmakers who suggest the production of faked pornographic images of women is somehow a free-speech issue? Have they noticed the ECHR also prohibits people being subjected to “inhuman or degrading treatment”?

The speed at which AI develops, combined with the anonymity and accessibility of the internet, will deepen the problem unless legislation arrives soon. All that is needed to create a deepfake is a handful of images harvested from someone’s online presence and software freely available on the internet. It is not difficult.


Last year the horrifying story of Gisèle Pelicot – whose husband organised her drugging and serial rape by more than 50 men over the course of several years – reminded us of the banality of such evil. There was no secret den of iniquity. In fact, the seedy underbelly of society looked rather quotidian: the accused included a civil servant, a prison guard, a nurse. The world of deepfakes is just as ordinary and even more anonymous. Victims have told of discovering that the perpetrator was someone they knew: a classmate, a family member, a (male) best friend.

As the government drafts legislation, it should focus on consent. This would make it exceedingly difficult for perpetrators to find legal loopholes; to violate women’s bodily autonomy; to muddy the principle that no means no. It would circumvent the pernicious victim-blaming mentality that contaminates the legal system. And it would help women to exist on the internet without fear of deepfakes, without fear that a person who made a deepfake would be exonerated in a court of law.

Deepfakes should be called what they are: sexual abuse material. That a woman did not consent to her likeness being used in such a way should be enough to criminalise the act. It needn’t be more complicated than that.  



This article appears in the 05 Feb 2025 issue of the New Statesman, The New Gods of AI