In Mountain View, California, a legion of competitors is vying to develop a self-sufficient form of artificial intelligence (AI).
While self-driving cars are fast defining the future of transportation, and AI gaming systems are becoming a reality, the idea of a truly autonomous AI is often tied to the promise of computers capable of tackling the complexities of language. Progress had previously been stunted by the limited processing power of these machines. However, recent advancements at Google headquarters in Silicon Valley have helped researchers take a step towards a linguistically adept AI.
Google Brain, a research project exploring deep learning (a branch of machine learning), presented a paper earlier this month at the International Conference on Learning Representations, detailing the methods employed to teach its AI how to better communicate with language.
The team of computer scientists, working with researchers from Stanford University and the University of Massachusetts, fed 11,000 unpublished books to an AI model – almost 3,000 of which were romance novels. The system, designed to mimic a biological neural network, was trained on the novels in the hope that it would grasp the grammatical and syntactic nuances of coherent sentences.
The team then repeatedly handed the network two lines from the books and tasked it with writing sentences that formed a natural, intelligible progression between them. The result? An AI that writes strangely existential poetry (the first and last lines of each block below were supplied by the team):
there is no one else in the world.
there is no one else in sight.
they were the only ones who mattered.
they were the only ones left.
he had to be with me.
she had to be with him.
i had to do this.
i wanted to kill him.
i started to cry.
i turned to him.
—
he was silent for a long moment
he was silent for a moment.
it was quiet for a moment.
it was dark and cold.
there was a pause.
it was my turn.
Though some of the poetry the AI produced was nonsensical, the examples above demonstrate that the system can write a string of comprehensible sentences. In the paper, the team outlined the mechanics behind the AI’s smooth sentence-to-sentence transitions. Earlier blueprints for a sentence-generating AI used the “standard recurrent neural network language model”, which proved inefficient because it “generates sentences one word at a time”, with no representation of the sentence as a whole. To improve upon this, the new system used a variational autoencoder, an unsupervised learning model that compresses each sentence into a compact representation, which allows the AI to generate new sentences from the information it’s provided with.
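The poems above come from interpolation: each endpoint sentence is mapped by the autoencoder to a vector, intermediate points are taken along the straight line between the two vectors, and a decoder turns each point back into a sentence. The minimal sketch below shows only the interpolation step, with small hypothetical vectors standing in for the trained model’s sentence encodings:

```python
def interpolate_sentences_sketch(z_a, z_b, steps=5):
    """Return `steps` points evenly spaced on the line from z_a to z_b.

    In the trained system, z_a and z_b would be the autoencoder's
    encodings of the two given sentences, and each intermediate point
    would be decoded back into text. No trained model is used here.
    """
    path = []
    for i in range(steps):
        t = i / (steps - 1)  # fraction of the way from z_a to z_b
        path.append([(1.0 - t) * a + t * b for a, b in zip(z_a, z_b)])
    return path

# Hypothetical 4-dimensional codes standing in for two sentences.
z_a = [1.0, 0.0, 0.0, 0.0]
z_b = [0.0, 0.0, 0.0, 1.0]

for point in interpolate_sentences_sketch(z_a, z_b, steps=3):
    print(point)  # midpoint is [0.5, 0.0, 0.0, 0.5]
```

Because the autoencoder’s space is continuous, nearby points decode to similar sentences, which is why each line of the poems shifts only slightly from the one before it.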
Many romance novels share similarities in plot – a trait that proved useful to the evolution of the AI’s linguistic skills. As Google software engineer and research team member Andrew Dai told BuzzFeed News, “Girl falls in love with boy; boy falls in love with a different girl. Romance tragedy.” The input books allowed the autoencoder to learn how diverse language can be used to tell stories with fundamentally similar narratives.
The recent research builds on previous forays into machine language processing. A separate team at Google built a chatbot in June 2015 that responded to the question “What is the purpose of life?” with a humanist’s answer: “To serve the greater good,” and a futurist’s answer: “To live forever.”
Successes with Google Brain’s other research projects, led by Senior Fellow Jeffrey Dean, make eloquent machines a likely outcome: Google Search, Google Translate, Gmail and DeepMind’s AlphaGo system have all been influenced by the deep learning research project’s work. Though we are a long way off from a fully operative AI language system, there are promising signs that we will eventually get one.