Comment
4 January 2024

You won’t be able to ignore AI in 2024

Computer-generated content is about to flood the world.

By Will Dunn

Mickey Mouse would perhaps be the most revealing person to ask about what AI holds in store for 2024. The copyright to the cartoon rodent’s first movie, Steamboat Willie, expired on 1 January, and immediately the internet began taking the Mickey: cobbled-together games and videos proliferated, and worthless NFTs were hastily minted. Most attention went to a junk horror film whose entire premise (“the mouse is out”) seems to be that it is now possible to cash in on the expiration of Disney’s intellectual property.

A growing body of evidence suggests such rehashing is happening at a much bigger scale, whether copyright protections exist or not. Large language models (LLMs) promise a future of endlessly derivative, mechanically reprocessed culture, in which every creative idea is automatically remixed into a soup of algorithmic content opportunities. 

Last month Reid Southen, a concept artist who has worked for most of the major studios, began asking generative AI models to make him images of movie characters; they obliged, creating pictures that look almost identical to screenshots from well-known films. The logical inference from this is that these models have been “trained” using films and characters that are very much within their copyright periods.

Southen showed these images to Gary Marcus, the AI expert and professor of psychology and cognitive science at New York University, and together they’ve collected data for a paper, to be published this week, which shows that, given basic two-word prompts (“animated sponge”; “videogame Italian”), these models produce images instantly recognisable as SpongeBob or Mario.

Marcus calls this “plagiaristic output”: whether it’s actually plagiarism will be decided by the courts (the New York Times is already suing OpenAI and Microsoft for allegedly using and reproducing its reporting), but it looks an awful lot like billions of dollars in revenue are being made using creative works that haven’t been paid for.

It may be that the world’s tech and media companies are currently negotiating licensing deals that will allow LLMs to endlessly reproduce SpongeBob, but Marcus says it is going to be very difficult for such deals to encompass all the creative works used to build an LLM, because (in a great example of Silicon Valley entitlement) the systems were never designed to attribute their results to anyone. “The way that these systems work is they break everything into little bits, and recompose the bits in ways that they don’t have a clear handle on,” Marcus told me. When it comes to saying who owns what, “the [AI] companies aren’t telling you, and they don’t know themselves”.

What we do know is that the fundamental purpose of an LLM is to calculate the output most likely to be accepted. To a Western internet user, a picture of SpongeBob is the most acceptable answer to a request to see an “animated sponge”. What this means, says Marcus, is that any LLM “is built to be derivative. It’s not an accident that these things are happening. Probably, the better the systems get at predicting in general, the more likely they are to create potentially infringing things.”
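
To make that mechanism concrete, here is a minimal sketch in Python of greedy, frequency-based completion. It is a toy illustration under invented assumptions (the caption corpus and the function are hypothetical), not how any real model is implemented, but it shows why the likeliest answer is also the most derivative one:

    from collections import Counter

    # Hypothetical "training data": captions a toy model has seen.
    training_captions = [
        "animated sponge spongebob",
        "animated sponge spongebob",
        "animated sponge spongebob",
        "animated sponge kitchen sponge",
    ]

    def most_probable_completion(prompt, corpus):
        """Return the completion that most often follows the prompt."""
        completions = Counter(
            caption[len(prompt):].strip()
            for caption in corpus
            if caption.startswith(prompt)
        )
        # Greedy choice: the single most frequent continuation wins,
        # regardless of who owns the underlying work.
        return completions.most_common(1)[0][0]

    print(most_probable_completion("animated sponge", training_captions))
    # Prints "spongebob": the derivative answer is, by construction, the likeliest.

Real systems predict over billions of learned fragments rather than whole captions, but the incentive is the same: the better the model’s estimate of what usually follows a prompt, the more faithfully it reproduces the most common (and often copyrighted) material it was trained on.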

The rewards of the AI boom (Microsoft’s market value rose by a trillion dollars last year) incentivise companies to develop fast and ask forgiveness rather than permission. Michael Wooldridge, professor of computer science at Oxford University, has been studying AI for four decades. “We really haven’t seen anything like this in the technology world since the browser wars of the 1990s,” he told me, “when technology companies desperately tried to claw ahead in the emerging internet marketplace.”

Marcus thinks there may be a parallel between today’s AI companies and the music file-sharing service Napster, which quickly grew to tens of millions of users by allowing people to circumvent copyright, before a series of high-profile lawsuits (brought by Metallica and Dr Dre) led to its demise. A revolution did follow (Spotify now has more than half a billion users), but it didn’t belong to Napster.

That doesn’t mean the Mickey Mouse principle won’t appear elsewhere, however, and this is especially true in politics: the UK and US elections this year will present new opportunities to automate messaging, and with it misinformation. ChatGPT may be as important to the elections of 2024 as Facebook was to voting in 2016 – which is to say, no one will ever really be able to measure how much difference it made, but it’ll certainly get the blame.

“We often don’t understand watershed moments in technology history until long after the fact,” said Wooldridge. “But 2023 was, I think, different. We saw the mass take-up of a new category of AI tool, and both the technology itself and the marketplace evolved at dizzying speed; 2024 looks set to be at least as dramatic.”
