New Times,
New Thinking.

Advertorial content sponsored by the Authors' Licensing and Collecting Society

Authors and artificial intelligence: what next?

In 2024, we need progress on regulation to protect authors, creators and the wider economy from potential harms.

By Richard Combes

ChatGPT was released to the public 12 months ago, accruing one hundred million users in just two months – a feat that took Facebook more than four years to accomplish. Its release ignited much excitement and interest in the capabilities of generative artificial intelligence, whilst also raising urgent questions about the nature of human creativity, the future of the creative industries and potential widespread copyright infringement. These questions have yet to be answered.

Fears around AI and its potential to further devalue human creativity and even replace human authors were cited as a reason for the US writers’ strike that began in May. In September, the Authors Guild filed a lawsuit against OpenAI (the company behind ChatGPT) on behalf of a group of prominent authors, accusing the company of using copyrighted works in the training of their model: “The success and profitability of OpenAI are predicated on mass copyright infringement without a word of permission from or a nickel of compensation to copyright owners”.

Policy responses to these issues have moved at different speeds in different places. The Biden administration issued its Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, while the EU has developed its own Artificial Intelligence Act. Despite hosting the inaugural AI Safety Summit, the UK government has not developed a similar framework. This lack of international harmony risks undermining copyright protections, allowing bad actors to find loopholes where protections are weakest, with consequences for creators globally.

Although no plans for comprehensive regulation have yet been announced, the UK government is developing a code of practice to govern the use of copyrighted works by AI companies. AI capabilities are developing at a rapid pace, and this constantly shifting landscape risks making any code of practice redundant unless it is underpinned by high-level principles that are clear and stable. At ALCS, we have published a set of principles that policymakers must consider when developing legislation in this area:

  • Human authors should be compensated for their work. New technologies must respect the established precedent of no use without payment. Licensing is an effective way of ensuring authors are paid for use of their work.
  • New developments in technology cannot be used as an excuse to erode the established fundamental rights of authors.
  • Licensing terms should be carefully considered to ensure they cover every use of an author’s work. New technologies are capable of analysing an existing work and using the content to create many other works. Authors should be compensated every time their work is used to create something new, not just when their work is first analysed.
  • Stakeholders must acknowledge the limitations of AI. AI is trained on data created by humans, and so has a tendency to repeat and reinforce human biases and misconceptions.
  • Policymakers and industry should seek applications of AI that support creators. If AI is used to diminish the role of creators in the creative process, it will have devastating cultural and economic consequences.

What can be done to restore some order and balance to the current situation? A backdated licence for generative AI companies could be an effective way of compensating authors for the widespread use of copyright works that has already taken place in the training of AI models. However, many authors would understandably have reservations about licensing their works for ongoing use if those works are being used to generate derivative works that compete directly with their own, threatening to further jeopardise authors’ ability to make a living from their writing.

The feelings of many within the creative community were encapsulated by Ed Newton-Rex, a former executive at Stability AI who recently resigned in protest: “I think that ethically, morally, globally, I hope we’ll all adopt this approach of saying, ‘you need to get permission to do this from the people who wrote it, otherwise, that’s not okay’”.

Given what’s at stake economically, socially and culturally, we hope that 2024 will see this debate evolve constructively, towards solutions that both enable the realisation of new technologies and preserve and protect the rights of individual creators.

Stay updated on our work around authors and AI by visiting