
Culture
10 August 2021

Can the film industry be trusted with AI technology?

After a director’s controversial use of AI to recreate the voice of the late chef Anthony Bourdain, the industry is walking into an ethical minefield.

By Ed Lamb

Films and TV shows love to envisage what’s coming next – from Blade Runner to The Terminator; Back to the Future to 2001: A Space Odyssey. Often these visions turn out to be little more than fantasy (much as we were all hoping for Marty McFly’s hoverboard and self-tying shoes in 2015). But some of these far-fetched ideas do, eventually, appear in reality. The rapid development of artificial intelligence, which can now fly drones, drive cars, write music and even paint in the style of Van Gogh, means that some aspects of these distant futures are just around the corner.

July brought the UK release of the documentary Roadrunner: A Film About Anthony Bourdain, directed by Morgan Neville. The film proved controversial: a few of its lines, seemingly narrated by Bourdain, were in fact an AI simulation of his voice, created from archive material. Bourdain’s widow has expressed disapproval at the simulation, while Neville has happily defended what he calls a “modern storytelling technique”. It is true that Bourdain wrote the words, but he never said them aloud.

Neville’s decision to use AI is an uncomfortable one, raising a number of moral conundrums. Bourdain took his own life in 2018, which makes the idea of resurrecting his voice particularly macabre. What’s more, had Neville not admitted to having used AI in a magazine interview, there’d be no way for us as viewers to know these lines were never spoken by the chef. AI is now advanced enough to deceive us.

[See also: Anthony Bourdain was food’s first rockstar – and so much more]

The presence of a deceased star in a film made after their death has not always been seen as controversial: when the actress Carrie Fisher, who died in 2016, appeared three years later in Star Wars: The Rise of Skywalker (2019), fans were impressed by the technology. The difference in this case was that scenes unused in previous films were repurposed – it was still Fisher’s face and voice, but her hair, costume and movement had been simulated. Fisher was always meant to be part of the film, and, crucially, there was transparency about the techniques used to make it happen.


But the case of the Bourdain film reveals an industry walking into an ethical minefield, failing to consider the practical consequences of increasingly sophisticated technologies. The use of AI to simulate voice and action distances performers from their work; when we hear or witness something which, though based on real human activity, was not created by it, it is more difficult to attribute the product to its model.

Ironically, these philosophical questions are raised in films and TV plotlines themselves – just think of HBO’s Westworld or certain episodes of Black Mirror. A storyline from the animated Netflix sitcom BoJack Horseman involves the titular character having all his scenes in a movie replaced with AI simulations.

Far more troubling is the issue of consent: you can’t agree to something if you’re dead. Even if Bourdain’s family had approved of recreating his voice, there is no way he could have given his blessing.


The use of AI to create “deepfakes”, digitally altered images that make someone appear as someone else, has already posed huge problems in pornography. Thanks to “deepfake” technology, someone only needs to get hold of a few of your selfies to give you the starring role in a sex scene – no consent needed. “Deepfakes” are also used to spread misinformation: watching the comedian Jordan Peele ventriloquising President Obama using AI is amusing, until we remember that, in 2019, a social media video was doctored to make the Democratic House Speaker Nancy Pelosi look like she was slurring her words.

[See also: The doctored video of Nancy Pelosi shared by Trump is a chilling sign of things to come]

AI is already being mishandled in the entertainment industry. Using AI responsibly is particularly difficult because the technology develops so rapidly and its capabilities are so hard to predict. As legislation lags behind real-world usage, the attitudes of creators such as Neville are of paramount importance.

As we become more familiar with the capabilities of AI, the popular morality emerging around its use must inform and strengthen legislative and institutional standards. We live in a world where AI plays an increasingly prominent role – but though its dominance might seem inevitable, it is crucial not to be taken in by the myth that AI development is a runaway train over which we have no control. As John Tasioulas, the director of the newly established Institute for Ethics in AI at Oxford University, has stated: “AI is not a matter of destiny, but instead involves successive waves of highly consequential human choices.”

It’s time for an industry that so often studies the future of humanity to start considering its own. In Westworld, an AI is asked “Are you real?” The reply comes: “If you can’t tell, does it matter?” We ought to answer: yes, it does.