
Science & Tech
17 May 2016

Google’s AI model has learned to write poetry using romance novels

The system could lead to the development of future AI capable of communicating with humans.

By Hasan Chowdhury

In Mountain View, California, a legion of competitors is vying for first place in the race to develop a self-sufficient form of artificial intelligence (AI).

While self-driving cars are fast defining the future of transportation, and AI gaming systems are becoming a reality, the idea of a truly autonomous AI is often tied to the promise of computers capable of tackling the complexities of language. Progress had previously been held back by the limited processing power of these machines. However, recent advancements at Google headquarters in Silicon Valley have helped researchers take a step towards a linguistically adept AI.

Google Brain, a research project exploring deep learning (a branch of machine learning), presented a paper earlier this month at the International Conference on Learning Representations, detailing the methods employed to teach its AI how to better communicate with language.

The team of computer scientists, working with researchers from Stanford University and the University of Massachusetts, fed 11,000 unpublished books to an AI model – almost 3,000 of which were romance novels. The system, a neural network designed to mimic the brain's web of neurons, was trained on the novels in the hope that it would grasp the grammatical and syntactic nuances of coherent sentences.

The team then repeatedly handed the network two lines from the books, giving it the task of writing sentences that would demonstrate a natural, intelligible progression between the two. The result? An AI that writes strangely existential poetry (the first and last line of each passage below were supplied by the team):


there is no one else in the world. 
there is no one else in sight.
they were the only ones who mattered. 
they were the only ones left.
he had to be with me.
she had to be with him.
i had to do this.
i wanted to kill him. 
i started to cry. 
i turned to him.


he was silent for a long moment.
he was silent for a moment.
it was quiet for a moment.
it was dark and cold.
there was a pause.
it was my turn.

Though some of the poetry produced by the AI was nonsensical, the examples above demonstrate that the system can write a string of comprehensible sentences. In the paper, the team outlined the mechanics behind the AI's smooth sentence-to-sentence transitions. Earlier blueprints for a sentence-generating AI used the "standard recurrent neural network language model", which proved inefficient because it "generates sentences one word at a time". To improve on this, the new system used a variational autoencoder, an unsupervised learning model that allows the AI to generate new data from the information it is given.
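The idea behind the interpolation task can be sketched in a few lines of code. This is only a toy illustration, not the team's actual system: a real variational autoencoder encodes whole sentences into continuous vectors and decodes intermediate vectors back into text, whereas here the latent codes and vector sizes are invented for the example, and the "decoding" step is left as a comment.

```python
# Toy sketch of latent-space interpolation, the mechanism behind the
# sentence-to-sentence progression described in the paper. The two
# endpoint vectors stand in for the encodings of the two supplied
# sentences; the points between them would each be decoded into an
# intermediate sentence.

def interpolate(z_start, z_end, steps):
    """Return `steps` evenly spaced points on the line between two latent vectors."""
    points = []
    for i in range(steps):
        t = i / (steps - 1)  # 0.0 at the first sentence, 1.0 at the last
        points.append([(1 - t) * a + t * b for a, b in zip(z_start, z_end)])
    return points

# Hypothetical latent codes for the two input sentences.
z_a = [0.0, 1.0]
z_b = [1.0, 0.0]

for z in interpolate(z_a, z_b, 5):
    print(z)  # in a real VAE, each code would be decoded into a sentence
```

Because the latent space is continuous, each step between the two codes decodes to a sentence that is a small, smooth variation on its neighbours – which is why the AI's output reads as a gradual drift from the first supplied line to the last.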

Many romance novels share similarities in plot – a trait which proves useful to the evolution of AI linguistic skills. As Google software engineer and research team member Andrew Dai told BuzzFeed News, "Girl falls in love with boy; boy falls in love with a different girl. Romance tragedy." The books allowed the autoencoder to learn how diverse language can be used to tell stories with fundamentally similar narratives.

The recent research builds on previous forays into machine language processing. A separate team at Google built a chatbot in June 2015 that responded to the question “What is the purpose of life?” with a humanist’s answer: “To serve the greater good,” and a futurist’s answer: “To live forever.”

Successes with Google Brain's other research projects, led by Senior Fellow Jeffrey Dean, suggest that eloquent machines are a likely outcome: Google Search, Google Translate, Gmail and DeepMind's AlphaGo system have all been influenced by the project's deep learning work. Though we are a long way from a fully operative AI language system, there are promising signs that we will eventually get one.