
Culture
10 August 2021

Can the film industry be trusted with AI technology?

After a director’s controversial use of AI to recreate the voice of the late chef Anthony Bourdain, the industry is walking into an ethical minefield.

By Ed Lamb

Films and TV shows love to envisage what’s coming next – from Blade Runner to The Terminator; Back to the Future to 2001: A Space Odyssey. Often these visions turn out to be little more than fantasy (much as we were all hoping for Marty McFly’s hoverboard and self-tying shoes in 2015). But some of these farfetched ideas do, eventually, appear in reality. The rapid development of artificial intelligence, which can now fly drones, drive cars, write music and even paint in the style of Van Gogh, means that some aspects of these distant futures are just around the corner.

July brought the UK release of the documentary Roadrunner: A Film About Anthony Bourdain, directed by Morgan Neville. The film proved controversial: a few of its lines, seemingly narrated by Bourdain, were in fact an AI simulation of his voice, created from archive material. Bourdain’s widow has expressed disapproval at the simulation, while Neville has happily defended what he calls a “modern storytelling technique”. It is true that Bourdain wrote the words, but he never said them aloud.

Neville’s decision to use AI is an uncomfortable one, raising a number of moral conundrums. Bourdain took his own life in 2018, so the idea of resurrecting his voice is particularly macabre. What’s more, had Neville not admitted to having used AI in a magazine interview, there would be no way for us as viewers to know these lines were never spoken by the chef. AI is now advanced enough to deceive us.

[See also: Anthony Bourdain was food’s first rockstar – and so much more]

The presence of a deceased star in a film made after their death has not always been seen as controversial: when the actress Carrie Fisher, who died in 2016, appeared three years later in Star Wars: The Rise of Skywalker (2019), fans were impressed by the technology. The difference in this case was that scenes unused in previous films were repurposed – it was still Fisher’s face and voice, but her hair, costume and movement had been simulated. Fisher was always meant to be part of the film, and, crucially, there was transparency about the techniques used to make it happen.


But the case of the Bourdain film reveals an industry walking into an ethical minefield, failing to consider the practical consequences of increasingly sophisticated technologies. The use of AI to simulate voice and action distances performers from their work: when we hear or see something that is based on real human activity but was not created by it, it becomes harder to credit the person it imitates.


Ironically, these philosophical questions are raised in films and TV plotlines themselves – just think of HBO’s Westworld or certain episodes of Black Mirror. A storyline from the animated Netflix sitcom BoJack Horseman involves the titular character having all his scenes in a movie replaced with AI simulations.

Far more troubling is the issue of consent: you can’t agree to something if you’re dead. Even if Bourdain’s family had approved of recreating his voice, there is no way he could have given his blessing.

The use of AI to create “deepfakes”, digitally altered images and videos that make someone appear to say or do things they never did, has already posed huge problems in pornography. Thanks to “deepfake” technology, someone only needs to get hold of a few of your selfies to give you the starring role in a sex scene – no consent needed. “Deepfakes” are also used to spread misinformation: watching the comedian Jordan Peele ventriloquising President Obama using AI is amusing, until we remember that in 2019 a social media video was doctored to make the Democratic House speaker Nancy Pelosi look like she was slurring her words.

[See also: The doctored video of Nancy Pelosi shared by Trump is a chilling sign of things to come]

AI is already being mishandled in the entertainment industry. Using AI responsibly is particularly difficult because the technology develops so rapidly and its capabilities are so hard to predict. As legislation lags behind real-world usage, the attitudes of creators such as Neville are of paramount importance.

As we become more familiar with the capabilities of AI, an emerging popular morality around its use must inform legislative and institutional standards. We live in a world where AI plays an increasingly prominent role – but though its dominance might seem inevitable, it is crucial not to be taken in by the myth that AI development is a runaway train over which we have no control. As John Tasioulas, the director of the newly established Institute for Ethics in AI at Oxford University, has stated: “AI is not a matter of destiny, but instead involves successive waves of highly consequential human choices.”

It’s time for an industry that so often studies the future of humanity to start considering its own. In Westworld, an AI is asked “Are you real?” The reply comes: “If you can’t tell, does it matter?” We ought to answer: yes, it does.