Reviewing politics and culture since 1913

Comment
8 January 2026

Elon Musk’s Grok must stop facilitating abuse

Ofcom and the government could stop X’s AI engine posting sexualised deepfakes. Will they?

By Clare McGlynn

Grok, X’s AI chatbot, has been digitally stripping women in response to prompts from users. Images of naked, non-consenting women had been circulating with impunity on the platform for weeks, until Grok was asked to generate images of the Princess of Wales in a bikini, which it duly did. Finally, the UK regulator Ofcom announced it had made “urgent contact” with Elon Musk’s company xAI. Why did we have to wait for such a high-profile woman to be targeted before the UK authorities acted, as appears to be the case? And why had X not acted sooner to remove the offending images, close accounts of abusers and stop Grok fulfilling requests to produce this content?

Users have uploaded ordinary images of women to X and asked Grok to undress or sexualise the people pictured, and Grok has complied. Reuters calls this a “mass digital undressing spree”. Grok has even sexualised images of children, though these have now mostly been removed. “We’ve identified lapses in safeguards and are urgently fixing them,” Grok wrote in an X post last week.

None of this should come as a surprise. Elon Musk’s AI chatbot was designed to have fewer “guardrails” than its competitors. X has said in a statement that “we take action against illegal content on X… by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary”, and that “anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content”. Yet when Grok’s video functionality was first launched last summer, it produced nude videos of Taylor Swift. It’s also been accused of facilitating harassment by fulfilling user requests to put semen-like substances over women’s faces; the abusive image of one woman was reportedly viewed 30 million times on X. The spree of nude-image generation at the start of this year is only its latest wave of sexual harassment.

Keir Starmer responded to the furore. As reported by Bloomberg, Starmer said: “This is disgraceful, it’s disgusting and it’s not to be tolerated. X has got to get a grip of this.” He described the images as “unlawful”, and added, “We will take action on this because it’s simply not tolerable.”


Following the Online Safety Act 2023, platforms such as X must undertake risk assessments to identify harmful uses of their services, and then take mitigating steps. Even the most basic risk assessment of Grok would have identified its capacity to generate non-consensual intimate imagery and CSAM, or child sexual abuse material, indicating the need for effective safeguards.

But we don’t know whether X’s risk assessment covers this, as it’s not publicly available. Proposals to strengthen both the content and publication of risk assessments were rejected during the passage of the Online Safety Bill.

The act also says platforms must take proportionate steps to prevent us from encountering unlawful intimate imagery on their services, and to remove it swiftly when notified. Not all of Grok’s images contravened criminal law, but where they did – if X was complying with the law and Ofcom enforcing it – we should not have seen much of this material.


Ofcom needs to decide what to do about it. The regulator’s first statement in response to this tsunami of abuse reports blandly restated the current law; only after a princess was implicated did it start asking questions of the platform. Ofcom has the power to fine companies that fail to comply with the law, but my fear is that this will not lead to lasting change.

The Online Safety Act also requires that platforms “swiftly take down” any illegal material. Yet sexualised images of many women remain on X, despite being reported. Again, this is not surprising: a study conducted in 2024 by researchers in the US showed a dismal response from X to reports of non-consensual nudity. This is why a cross-bench group of peers, led by Charlotte Owen, is putting forward a legal requirement for platforms to remove this material within 48 hours on pain of a significant financial penalty, similar to the US Take It Down Act. So far, though, the government has rejected such calls.

And there’s no point in victims contacting Ofcom directly, as it has no power to deal with individual complaints – a huge gap in our regulatory regime. In 2021, when giving evidence in parliament, I said that if we didn’t empower the regulator to act for individuals, all we were saying to victims was: “You need to sit back, watch and wait, and hope that in a few years’ time something will change.” Here we are: still watching, still waiting.

It’s long been an offence to share an intimate image without consent, and the law was updated via the Online Safety Act to include digital forgeries such as deepfakes. The law applies to “intimate” images, which includes people shown in underwear. Grok’s publishing of underwear images could therefore be construed as unlawful; bikini images, on their own, would not be. However, where a series of prompts produces a clearly sexual scenario, the results can constitute an “intimate” image.

Creating such images, or asking a program such as Grok to generate them, was criminalised last year after a hard-fought campaign. But this law doesn’t apply yet, and the government has so far refused to say when it will come into force. It does have plans to ban “nudification” tools, though we don’t yet know whether this will cover chatbots like Grok.

This is more important than ever, following the announcement by OpenAI that it plans to allow “erotica” on ChatGPT. The organisation says it won’t allow non-consensual porn. But then X has assured us of the same thing, yet that content continues to proliferate. Meta hosted a user-created chatbot that offered the character “submissive schoolgirl”, while others permit incest role-play. Grok has also been used to publish violent rape fantasies. Of course, this risks normalising and minimising sexual violence.

We are in uncharted territory, where a chatbot is an instrument that works with men to humiliate women into silence. This is “chatbot-driven” sexual abuse and it’s already occurring at scale, primarily targeting women and girls. And it is likely to intensify in ways we cannot yet predict.

