
Comment | 23 May 2024

Deepfake technology endangers us all

It's not just politicians and global celebrities who suffer the consequences of AI-generated images: they can affect anyone with a social media presence.

By Sarah Manavis

The past year has been a wake-up call about the prevalence and sophistication of deepfakes. Be it the fake porn created using Taylor Swift’s likeness that spread across social media, or deepfake audio of Sadiq Khan speaking about the war in Gaza, AI-generated content is becoming more convincing – and dangerous. In what looks to be an election year in both the US and the UK, the threat such images pose to our democracy feels more tangible than ever (deepfakes of Joe Biden and Donald Trump are everywhere – both Rishi Sunak and Keir Starmer have already been targeted).

Politicians and global celebrities are the people most often described as being at risk from deepfakes. But another demographic is being targeted more than any other: social media influencers, particularly women. When the social media agency Twicsy surveyed more than 22,000 influencer accounts on Twitch, TikTok, Instagram, YouTube and Twitter/X in March, it found that 84 per cent had been the victims of deepfake pornography at least once (89 per cent of the deepfakes found were of female influencers). These weren’t small accounts – each had a five-figure follower count. And in the space of just one month, some of these deepfakes had received more than 100 million views.

Influencers make good subjects for deepfake technology. They upload thousands of images and videos of themselves in short spaces of time, often from multiple angles (you only need one high-quality image to create a convincing deepfake). They speak in similar cadences to one another to fit algorithmic trends, meaning their voices can be straightforwardly mimicked. They might use filters that leave them looking smoother and more cyborg-esque than any person you’d encounter in real life. And there is a host of apps available for anyone to download to create deepfakes – such as HeyGen and ElevenLabs – which require users to upload only a small number of images to make something that looks very real.

Influencers also make easier targets than your average celebrity. While there is also a wealth of images of, say, pop stars and athletes, these public figures typically have the money and resources to be litigious about deepfakes. Comparatively, influencers have limited means to do anything about the videos and images created using their likeness. Platforms, too, are far more likely to respond to celebrity deepfakes than to those of less famous individuals. When the pornographic deepfakes of Swift went viral on Twitter earlier this year, the site blocked all searches of her name, stemming the spread almost immediately. It’s difficult to imagine the reaction would have been the same for an influencer with only a few thousand followers.

Deepfake pornography is likely the most concerning problem for famous women. But this technology can be used for many other nefarious purposes beyond the creation of humiliating sexual content. Influencers’ likenesses are now increasingly used to create fake advertisements to sell dodgy products – such as erectile dysfunction supplements – and to push propagandist disinformation, such as the deepfake of a Ukrainian influencer praising Russia.


Even beyond using deepfakes of already-popular influencers to make ads they didn’t agree to, we are also starting to see how – via a scrapbook of images of multiple media figures – tech entrepreneurs can build whole new fake influencers, created entirely via generative AI. These accounts are full of hyper-realistic, computer-generated images, in which the fake influencers talk about their fake hobbies and share their fake personality quirks, while securing very real and lucrative brand deals. Some have gained hundreds of thousands of followers and generated thousands for their male creators every month. Entrepreneurs can also fabricate deepfake influencers who embody sexist stereotypes of the “perfect woman” to appeal directly to male audiences – accounts that could become more popular than the real, human influencers whose likenesses were used to make them.

Of course, this impacts many more people than just social media influencers, affecting the livelihoods of anyone who does creative work – be it those who make music or art, act or write. Just this week, the actor Scarlett Johansson claimed that OpenAI, the company behind ChatGPT, asked to use her voice for a chatbot and, after she declined, mimicked her voice anyway. OpenAI pulled the voice but claimed it was not an imitation of Johansson.

It’s easy to vilify influencers for being shallow and attention-seeking, and for promoting over-consumption and narrow beauty standards. But this trend shows us the danger deepfakes (and other forms of technology that could be used misogynistically) present for all of us – especially women. Anyone who has shared any image of themselves online is now at risk of having a deepfake made of them by anyone with malicious intent and internet access. If there is any digital representation of you online – an image as common as a Facebook profile picture, a professional headshot on LinkedIn, a video, even your voice or your written work – then you are susceptible. This reality should help us to see why deepfake technology needs immediate legislation: holistic, wide-reaching laws that address the risks deepfakes pose to all of us.

