
The Weekend Essay
26 February 2023

ChatGPT and the death of the author

AI-powered chatbots are not only exploiting human creativity but rapidly eroding it.

By Saffron Huang

In 1967, in an essay called “The Death of the Author”, the French literary theorist Roland Barthes argued that people should stop viewing the author’s intentions and biography as the ultimate source of meaning in a text. A text’s meaning is not fixed by the creator, Barthes claimed, but is always shifting, depending on how the reader interacts with the work. Barthes believed that we should dispense with the notion of the author, since they have no authority. Instead, readers should think of them merely as scribes who collect words and mark the blank pages. Authors may influence a text but they don’t decide how it is understood.

Who is the author behind the words that ChatGPT speaks? The popular language processing tool that the artificial intelligence company OpenAI launched in November 2022 is able to write fluently in English. It has quickly gone mainstream, composing sonnets, writing stories, and answering questions on diverse and complex topics – including world history, avant-garde films, birthday gift ideas and the complexity of the bubble-sort algorithm – for millions of users. It is also killing the idea of the author to an extent that Barthes could not have anticipated.

ChatGPT is trained by consuming terabytes of books, articles, code and web pages from the internet. The clever and sometimes all-too-human sentences, paragraphs and essays that it produces are only possible because the program has seen your Twitter and Reddit posts, the photos you uploaded to your blog, the code you open-sourced, and the academic article that you laboured over.

The data comes from everybody who has contributed to the common knowledge of humanity, and everyone who is on the internet. We are each, in a small way, an author – perhaps more aptly, a ghostwriter – of ChatGPT. All this information has been collected and reinterpreted in such a way that the intentions and subjectivity of any one individual disappear from the final product. ChatGPT uses a “large language model” (LLM) that, by learning patterns in data, can itself generate text; it is a sort of mechanical author that leverages and destroys all other authors at one and the same time.

Barthes only distanced a text from the authority of its writer; he wasn’t advocating removing the author’s name or scrubbing the writer’s intent from their words. His was not a total death of the creator. ChatGPT, in contrast, separates and transforms millions of texts away from their authorial sources completely: a far more conclusive form of mass murder.

These author-destroying LLMs will become a standard feature of tech products. Google is hurriedly launching its own LLM-based chatbot, Bard. Microsoft, which partners with OpenAI, is integrating ChatGPT’s technology into its search engine, Bing, and its web browser, Edge. The implications of this technological trend for how we source, trace and consume information will be revolutionary.



LLMs depend not just on each individual author, but on the common data and infrastructure of the entire internet – the digital commons. A shared resource for most of the world, this is where we communicate, learn and organise our lives. LLMs need to read the web, so they depend on people ensuring web pages are formatted in standard ways. They also require, among other things, the existence of Creative Commons (CC) licences, internet archive snapshots, and the online discussion spaces and news sources to which people contribute. The code behind LLMs is also contingent on open-source software libraries that people provide for free.

LLMs may not only enclose, but also erode, the very digital commons on which they depend. They are trained for plausibility rather than truthfulness, and so are liable to produce untrustworthy yet persuasive statements. They also generate text much faster than humans can write; perhaps in five years, most of the internet will be LLM-generated. This could mean huge amounts of low-quality content, such as phishing attacks or personalised disinformation; LLM products could also leak sensitive information found in their training data, or help less technical people generate code for cyberattacks, among other potential harms.

These models may also discourage people from contributing their work to the digital commons. Writers and other creatives might start using LLMs instead of writing their own material, or refuse to openly release their work because they don’t want it to be used in a chatbot dataset. The presence of the human author will continue to diminish.

It is fashionable to design LLM products for question-answering and search tasks, which require the LLM to generate explanatory material. This often requires moral and epistemic decisions on the part of the AI. ChatGPT is designed to be consulted for its all-knowing advice; when I ask it questions such as, “What is the most important function of citizens in a democracy?”, it produces only a single (and contestable) answer – voting in free and fair elections – without demonstrating how complex the topic is, or that its answer was sampled at random from a distribution and could easily have been a different response.
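
To see what “sampled from a distribution” means in practice, here is a minimal sketch in Python. The candidate answers and their probabilities are invented for illustration – a real model samples one token at a time from a vocabulary of tens of thousands – but the consequence is the same: run it again and the reply may differ.

```python
import random

# Toy distribution over whole answers, invented purely for illustration;
# a real LLM samples individual tokens from a far larger vocabulary.
candidate_answers = {
    "voting in free and fair elections": 0.40,
    "holding leaders accountable": 0.25,
    "staying informed and deliberating with others": 0.20,
    "participating in civic life beyond elections": 0.15,
}

def sample_answer(distribution, temperature=1.0):
    """Draw one answer at random. Higher temperature flattens the
    distribution, making less likely answers more probable."""
    weights = {a: p ** (1.0 / temperature) for a, p in distribution.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for answer, weight in weights.items():
        cumulative += weight
        if r <= cumulative:
            return answer
    return answer  # fallback for floating-point rounding

# Ask the same question three times: the "single" answer is just one draw.
for _ in range(3):
    print(sample_answer(candidate_answers))
```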

While OpenAI has tried to guide ChatGPT towards trustworthy, neutral and harmless statements, creating a chatbot that caveats everything it says (“As a language model AI, I cannot provide you with personal opinions, but…”), these moral and epistemic choices have nevertheless been made in advance by a small group of people, and could limit the range of opinions expressed on complex topics. Other companies developing their own LLMs may take a more malicious view of how such technology should behave and be used. The new Bing chatbot has confidently exhibited toxic behaviours, such as gaslighting, sulking and acting passive-aggressively; if even Microsoft’s product acts this way, what kinds of manipulation devices could other companies build?

One problem is that existing LLM products can’t yet tell you the perspective they are writing from, or the origins of the information they use. When we read texts, we want to understand the position or situation of the human author: where they’re coming from, what their biases might be, and what their stakes are.

Anyone who has interacted with ChatGPT can tell that it has been crafted anthropomorphically to possess a distinct “personality”, at once insightful and bland. But we can’t evaluate its perspective the way we do a human author’s. While it has consumed millions of sources, it isn’t adopting the perspective of the most relevant author it has learned from, or even a straightforward aggregation of them all. Its approach is more like a chewed-up, deformed amalgamation of its sources, a perspective shaped by undisclosed design decisions (such as the instructions that engineers give the AI behind the scenes). These can also be inconsistent, obscure and changeable.


Michel Foucault did not believe in the straightforward death of the author. Responding to Barthes, he asserted that while the author is not a fixed, consolidated subject that straightforwardly determines meaning, our knowledge of the author still plays an essential role in producing and regulating how texts are used and interpreted, and in how society’s knowledge circulates.

Employing technology to track the original sources that contributed to an LLM-produced text would help mitigate misinformation or untruths. Attaching a name, or even a URL, to a piece of writing allows people to contextualise what they are reading, and helps determine the credibility of the text. If we know what sources the LLM is retrieving its information from, we can guard our understanding of the world and prevent the spread of low-quality information.
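
To picture what such source-tracing could look like, the toy sketch below pairs each returned passage with the URL it came from. The corpus, URLs and crude word-overlap ranking are all invented assumptions for illustration; a real system would need retrieval and attribution built into the model’s pipeline rather than bolted on afterwards.

```python
# Hypothetical two-document corpus; the URLs and passages are invented.
corpus = [
    {"url": "https://example.org/elections",
     "text": "Free and fair elections let citizens choose and replace their representatives."},
    {"url": "https://example.org/civic-life",
     "text": "Citizens also hold leaders accountable between elections through protest and debate."},
]

def answer_with_sources(question: str, top_k: int = 2):
    """Rank documents by crude word overlap with the question and return each
    passage together with the URL it came from, so readers can check its origin."""
    q_words = set(question.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
        reverse=True,
    )
    return [(doc["text"], doc["url"]) for doc in ranked[:top_k]]

for passage, url in answer_with_sources(
    "What is the most important function of citizens in a democracy?"
):
    print(f"{passage}  [source: {url}]")
```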

We also need techniques that can verify whether a piece of writing is LLM-generated at all. The programming forum Stack Overflow banned ChatGPT-generated answers in 2022, shortly after the chatbot was released, saying that its answers were often simultaneously incorrect and plausible, making them difficult to moderate. Researchers are developing ways to watermark LLM-generated content, both to prevent the damage to readers’ trust caused when AI-generated work is mistaken for human work, and to protect people whose genuine beliefs might otherwise be dismissed or discredited as AI-generated.
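
One family of watermarking proposals works roughly like this: at each step the generator is nudged towards a pseudo-random “green” subset of the vocabulary, and a detector that knows the seeding rule later counts how often that subset was hit. The sketch below is a simplified illustration under invented assumptions (a tiny vocabulary, SHA-256 as the pseudo-random partition); it is not the scheme any particular company uses.

```python
import hashlib

def green_set(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token.
    A watermarking generator would nudge its sampling towards this 'green' half."""
    ranked = sorted(
        vocab,
        key=lambda tok: hashlib.sha256((prev_token + "|" + tok).encode()).hexdigest(),
    )
    return set(ranked[: int(len(ranked) * fraction)])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Detection: ordinary human text lands in the green half roughly 50% of
    the time; heavily watermarked text lands there far more often."""
    if len(tokens) < 2:
        return 0.0
    hits = sum(
        1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_set(prev, vocab)
    )
    return hits / (len(tokens) - 1)

# Toy usage with an invented vocabulary: unwatermarked text scores near 0.5.
vocab = "the a citizens vote elections hold leaders accountable free fair".split()
print(green_fraction("citizens vote free fair elections hold leaders accountable".split(), vocab))
```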

The most important consequences of this emerging technology may lie in the political economy of artificial intelligence. LLMs might change the structure of white-collar labour markets, from copywriting to legal work to coding, making work more precarious or eliminating certain kinds of work altogether.

The prevailing structure of the tech industry is fundamentally oligarchic: outsized power rests with a small number of companies. Given how capital-intensive these AI models are to train, the best-resourced companies, such as Google and Microsoft, are likely to be the ones that can produce this technology faster and at greater scale than their competitors. This risks not only extinguishing countless authors and diminishing the digital commons, but also reinforcing the monopolistic nature of the tech industry.

Some might argue that the economic surplus created by LLMs will be so large that it will indirectly compensate the public. But it is not clear that those who contribute are those who benefit. The most profitable applications of LLMs so far have come from companies such as Jasper, Copy.AI and NeuralText, which use them for copywriting and marketing text – content hardly equivalent to the diverse creative input found elsewhere on the internet. And is it right that private companies should draw on the thoughts and words of the commons, then sell ads and distribute flattened opinions back to the very same public?

Large language models such as ChatGPT have already turned many people into unwitting ghostwriters. The authors are still here, albeit increasingly invisible, and they need to be revived.

This article appears in the 01 Mar 2023 issue of the New Statesman, The Mission