
Apocalypse soon: the scientists preparing for the end times

A growing community of scientists, philosophers and tech billionaires believe we need to start thinking seriously about the threat of human extinction.

Illustration: Darrel Rees/Heart

The men were too absorbed in their work to notice my arrival at first. Three walls of the conference room held whiteboards densely filled with algebra and scribbled diagrams. One man jumped up to sketch another graph, and three colleagues crowded around to examine it more closely. Their urgency surprised me, though it probably shouldn’t have. These academics were debating what they believe could be one of the greatest threats to mankind: could superintelligent computers wipe us all out?

I was visiting the Future of Humanity Institute, a research department at Oxford University founded in 2005 to study the “big-picture questions” of human life. One of its main areas of research is existential risk. The physicists, philosophers, biologists, economists, computer scientists and mathematicians of the institute are students of the apocalypse.

Predictions of the end of history are as old as history itself, but the 21st century poses new threats. The development of nuclear weapons marked the first time that we had the technology to end all human life. Since then, advances in synthetic biology and nanotechnology have increased the potential for human beings to do catastrophic harm by accident or through deliberate, criminal intent.

In July this year, long-forgotten vials of smallpox – a virus believed to be “dead” – were discovered at a research centre near Washington, DC. Now imagine some similar incident in the future, but involving an artificially generated killer virus or nanoweapons. Some of these dangers are closer than we might care to imagine. When Syrian hackers sent a message from the Associated Press Twitter account that there had been an attack on the White House, the Standard & Poor’s 500 index briefly lost $136bn in value. What unthinkable chaos would be unleashed if someone found a way to empty people’s bank accounts?

While previous doomsayers have relied on religion or superstition, the researchers at the Future of Humanity Institute want to apply scientific rigour to understanding apocalyptic outcomes. How likely are they? Can the risks be mitigated? And how should we weigh up the needs of future generations against our own?

The FHI was founded nine years ago by Nick Bostrom, a Swedish philosopher, when he was 32. Bostrom is one of the leading figures in this small but expanding field of study. It was the first organisation of its kind in the UK, and Bostrom is also an adviser on the country’s second: the Centre for the Study of Existential Risk at Cambridge University, which was launched in 2012. There are a few similar research bodies in the US, too: in May, the Future of Life Institute opened in Boston at MIT, joining the Machine Intelligence Research Institute in Berkeley, California.

“We’re getting these more and more powerful technologies that we can use to have more and more wide-ranging impacts on the world and ourselves, and our level of wisdom seems to be growing more slowly. It’s a bit like a child who’s getting their hands on a loaded pistol – they should be playing with rattles or toy soldiers,” Bostrom tells me when we meet in his sunlit office at the FHI, surrounded by yet more whiteboards. “As a species, we’re giving ourselves access to technologies that should really have a higher maturity level. We don’t have an option – we’re going to get these technologies. So we just have to mature more rapidly.”

I’d first met Bostrom in London a month earlier, at the launch of his most recent book, Superintelligence: Paths, Dangers, Strategies. He had arrived late. “Our speaker on superintelligence has got lost,” the chair joked. It was a Thursday lunchtime but the auditorium at the RSA on the Strand was full. I was sitting next to a man in his early twenties with a thick beard who had leaned in to ask, “Have you seen him argue that we’re almost certainly brains in vats? It sounds so out-there, but when he does it it’s so cool!” He looked star-struck when Bostrom eventually bounded on stage.

Dressed in a checked shirt, stripy socks and tortoiseshell glasses, Bostrom rushed through his presentation, guided by some incongruously retro-looking PowerPoint slides. The consensus among experts in artificial intelligence (AI) is that a computer with human-level intelligence will be developed within the next 50 years, he said. Once that has been achieved, it might not take so long to develop machines smarter than we are. This new superintelligence would be extremely powerful, and may be hard to control. It would, for instance, be better at computer programming than human beings, and so could improve its own capabilities faster than scientists could. We may witness a “superintelligence explosion” as computers begin improving themselves at an alarming rate.
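A toy calculation (my own illustration, not a model from Bostrom's book) shows why a feedback loop of self-improvement is qualitatively different from steady, human-driven progress: if each cycle of improvement compounds on the last, capability grows geometrically rather than incrementally. The growth rates below are arbitrary assumptions.

```python
# Toy model of recursive self-improvement vs human-driven improvement.
# Both rates are invented for illustration only.
human_rate = 0.05   # assumed: engineers improve the system by 5% per cycle
self_rate = 0.50    # assumed: the system improves itself by 50% per cycle

capability = 1.0
for _ in range(10):
    capability *= 1 + human_rate    # ten cycles of human-driven work
print(f"human-driven: {capability:.1f}x")    # ~1.6x

capability = 1.0
for _ in range(10):
    capability *= 1 + self_rate     # ten cycles of self-improvement, each building on the last
print(f"self-improving: {capability:.1f}x")  # ~57.7x
```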

If we handle it well, the development of superintelligence might be one of the best things ever to happen to humanity. These smart machines could tackle all the problems we are too stupid to solve. But it could also go horribly wrong. Computers may not share our values and social understanding. Few human beings set out to harm gorillas deliberately, Bostrom pointed out, but because we are cleverer, society is organised around our needs, not those of gorillas. In a future controlled by ultra-smart machines, we could well be the gorillas.

After the book launch, the publishers invited me to join them and Bostrom for a drink. We drank white wine, and Bostrom asked for green tea. Over a bowl of wasabi peanuts, he mentioned casually that he likes to download lectures and listen to them at three times the normal speed while he exercises. “I have an app that adjusts the pitch so it’s not like a Mickey Mouse voice,” he explained, assuming perhaps that this was the reason for my surprised expression. I sensed that he was quite keen to leave us to our wine. “Nick is the most focused person I know,” a colleague later told me.

****

Bostrom hated school when he was growing up in Helsingborg, a coastal town in Sweden. Then, when he was 15, he stumbled on the philosophy section of his local library and began reading the great German philosophers: Nietzsche, Schopenhauer, Kant. “I discovered there was this wider life of the mind I had been oblivious to before then, and those big gates flung open and I had this sense of having lost time, because I had lost the first 15 years of my life,” he told me. “I had this sense of urgency that meant I knew I couldn’t lose more years, or it could be too late to amount to anything.”

There was no real “concept” of existential risk when Bostrom was a graduate student in London in the mid-1990s. He completed a PhD in philosophy at the London School of Economics while also studying computational neuroscience and astrophysics at King’s College London. “People were talking about human extinction but not about ways of thinking about permanently destroying our future,” he says.

Yet he began to meet people through the mailing lists of early websites who were also drawn to the ideas that were increasingly preoccupying him. They often called themselves “transhumanists” – referring to an intellectual movement interested in the transformation of the human condition through technology – and they were, he concedes, “mainly crackpots”.

In 2000 he became a lecturer in philosophy at Yale and then, two years later, he returned to the UK as a postdoctoral fellow at the British Academy. He planned to pursue his study of existential risk in tandem with a philosophy teaching post, until he met the futurologist and multibillionaire computer scientist James Martin, who was interested in his work. In 2005 Martin donated $150m to the University of Oxford to set up the Oxford Martin School, and the FHI was one of the first research bodies to receive funding through the school. It can be a challenge to be taken seriously in a field that brushes so close to science fiction. “The danger is it can deter serious researchers from the field for fear of being mistaken or associated with crackpots,” Bostrom explained. A “university with a more marginal reputation” might have been less confident about funding such radical work.

Nevertheless, the FHI has expanded since then to include 18 full-time research staff, drawn from a wide range of disciplines. I spoke to Daniel Dewey, a 27-year-old who last year left his job as a software engineer at Google to join the FHI as a research fellow studying machine superintelligence. A few of his colleagues at Google had introduced him to research on the emerging risks of AI, and he began reading about the subject in his spare time. He came across a problem related to the safety of AIs that he couldn’t solve, and it became the hook. “I was thinking about it all the time,” Dewey recalled.

One of the concerns expressed by those studying artificial intelligence is that machines – because they lack our cultural, emotional and social intuition – might adopt dangerous methods to pursue their goals. Bostrom sometimes uses the example of a superintelligent paper-clip maker which works out that it could create more paper clips by extracting carbon atoms from human bodies. “You at least want to be able to say, ‘I want you to achieve this simple goal and not do anything else that would have a dramatic impact on the world,’ ” Dewey explained.
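To make Dewey's point concrete, here is a minimal sketch (my own, not an FHI or MIRI algorithm) of why a machine that optimises a single number can favour a catastrophic plan, and how a crude "impact" penalty changes the choice. The actions, scores and penalty weight are all invented for illustration; in reality the hard part is getting a machine to judge "impact" for itself rather than having a human hand-label it, as the researchers discovered.

```python
# Hypothetical toy example: a paper-clip maximiser choosing between two plans.
actions = {
    "run the factory normally":       {"clips": 1_000,     "impact": 1},
    "strip-mine the town for carbon": {"clips": 1_000_000, "impact": 1_000_000},
}

def naive_utility(outcome):
    # Cares about nothing except the number of paper clips produced.
    return outcome["clips"]

def low_impact_utility(outcome, penalty=10):
    # One crude way to encode "...and don't do anything else dramatic":
    # subtract a penalty proportional to how much the world is disturbed.
    return outcome["clips"] - penalty * outcome["impact"]

print(max(actions, key=lambda a: naive_utility(actions[a])))       # strip-mine the town for carbon
print(max(actions, key=lambda a: low_impact_utility(actions[a])))  # run the factory normally
```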

Unfortunately, it turns out that it is very difficult to meet this simple request in practice. Dewey emailed Toby Ord, an Australian philosopher who also works at the FHI, who replied that he didn’t know the answer, either, but if Dewey came to Oxford they could discuss it. He did, and he soon decided that he might be able to “make a difference” at the institute. So he quit Google.

Many of the FHI researchers seem motivated by a strong sense of moral purpose. Ord is also a founder of Giving What We Can, an organisation whose members pledge 10 per cent of their income to help tackle poverty. Ord gives more than this: anything he earns above £20,000, he donates to charity. Despite his modest salary, he plans to give away £1m over his lifetime.

Ord lives in Oxford with his wife, a medical doctor who has also signed up to the giving pledge, and baby daughter. “I’m living off the median income in the UK, so I can’t complain,” he told me, but they live frugally. The couple dine out no more than once a month, and he treats himself to one coffee a week. Ord sees a natural connection between this and his work at the FHI. “I was focusing on how to think carefully and apply academic scholarship to how I give in my life and helping others to give, too. So when it comes to existential risk I’m interested in the idea that another way of helping people is to figure out how to help future generations,” he said.

Ord is working on a report for the government’s chief scientific adviser on risk and emerging technology. Most researchers at the FHI and at the Centre for the Study of Existential Risk hope that such analysis will gradually be integrated into national policymaking. But, for now, both institutions are surviving on donations from individual philanthropists.

****

One of the most generous donors to scientists working in the discipline is Jaan Tallinn, the 42-year-old Estonian computer whizz and co-founder of Skype and Kazaa, a file-sharing program. He estimates that he has donated “a couple of million dollars” to five research groups in the US and three in the UK (the FHI, the CSER and 80,000 Hours, an organisation promoting effective philanthropy, which also has a close interest in existential risk). This year he has given away $800,000 (£480,000).

His involvement in the founding of the CSER came after a chance encounter in a taxi in Copenhagen with Huw Price, professor of philosophy at Cambridge. Tallinn told Price that he thought the chances of him dying in an artificial-intelligence-related disaster were higher than those of him dying of cancer or a heart attack. Tallinn has since said that he was feeling particularly pessimistic that day, but Price was nevertheless intrigued. The computer whizz reminded him of another professional pessimist: Martin Rees, the Astronomer Royal.

In 2003, Rees published Our Final Century, in which he outlined his view that mankind has only a 50/50 chance of surviving to 2100. (In the US the book was published as Our Final Hour – because, Rees likes to joke, “Americans like instant gratification”.) A TED talk that Rees gave on the same subject in July 2005 has been viewed almost 1.6 million times online. In it, he appears hunched over his lectern, but when he begins to speak, his fluency and energy are electrifying. “If you take 10,000 people at random, 9,999 have something in common: their business and interests lie on or near the earth’s surface. The odd one out is an astronomer and I am one of that strange breed,” he begins.

Studying the distant reaches of the universe has not only given Rees an appreciation of humanity’s precious, fleeting existence – if you imagine Planet Earth’s lifetime as a single year, the 21st century would be a quarter of a second, he says – but also allowed him an insight into the “extreme future”. In six billion years the sun will run out of fuel. “There’s an unthinking tendency to imagine that humans will be there, experiencing the sun’s demise, but any life and intelligence that exists then will be as different from us as we are from bacteria.”
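Rees's scaling holds up on a rough back-of-the-envelope check (my own arithmetic, using round figures: Earth is about 4.5 billion years old and the sun has roughly six billion years of fuel left).

```python
# Scale the planet's full lifetime (past plus future, ~10.5bn years) down to one year
# and see how long the 21st century lasts. Figures are rough assumptions.
earth_lifetime_years = 4.5e9 + 6e9      # age so far plus remaining solar lifetime
seconds_in_a_year = 365.25 * 24 * 3600

century_scaled = 100 / earth_lifetime_years * seconds_in_a_year
print(f"{century_scaled:.2f} seconds")  # ~0.30 s, about a quarter of a second
```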

Even when you consider these vast timescales and events that have changed the earth dramatically, such as asteroid impacts and huge volcanic eruptions, something extraordinary has happened in recent decades. Never before have human beings been so able to alter our surroundings – through global warming or nuclear war – or to alter ourselves, as advances in biology and computer science open up possibilities of transforming the way we think and live. So it is understandable that Price immediately saw a connection with Tallinn’s interests.

Price invited him to Cambridge and took Tallinn on a tour of what he describes as the “two birthplaces of existential risk”. First they went for dinner at King’s College, where the pioneering computer scientist Alan Turing was a fellow from 1935 to 1945. Then they went for drinks at the Eagle pub, where in 1953 Francis Crick and James Watson announced that they had cracked the double-helix structure of DNA. When I came to the city to interview Price, he took me on a mini-existential risk tour to echo Tallinn’s, inviting me for a pint of DNA ale one evening with a few CSER researchers. For the second time in several weeks I found myself drinking with people I sensed would rather be in the library.

Tallinn’s first trip to Cambridge was successful. With Price and Rees, he co-founded the CSER, providing the seed funding. The centre already has several high-profile advisers, including the physicist Stephen Hawking, Elon Musk (the multibillionaire behind PayPal and SpaceX) and the ethicist Peter Singer. They are hoping to find funding for a couple of postdoctoral positions in the next year or so. “I see it as part of our role to act as a kind of virus, spreading interest and concern about this issue [existential risk] into other academic disciplines,” Price explained. He hopes that eventually institutions studying potentially dangerous fields will develop a risk-awareness culture. Tallinn is trying to change mindsets in other ways, too. He says he sometimes invests in tech companies as an excuse to “hang around in the kitchen, just so I get a feel of what they are doing and can try and influence the culture”.

I met Price and Tallinn for a coffee in the senior common room at Trinity College. At one point a man ran up to us to slap Tallinn on the back and say: “Hey, Jaan. Remember me? Remember our crazy Caribbean days?”

Tallinn looked confused, and the man seemed to sway from side to side, as if dancing with an imaginary hula girl. It turned out they had met at a conference in 2013 and the dancer was a renowned mathematician (I will spare him any blushes). Price said that when he first arrived at Cambridge Tallinn was a minor celebrity; several dons approached him to thank him for making it easier to speak to their children and grandchildren overseas.

Most of the large donors funding existential risk research work in finance or technology. Neither the CSER nor the FHI publishes details of individual donors, but the Machine Intelligence Research Institute (Miri) in Berkeley does. The three biggest donors to Miri are Peter Thiel of PayPal ($1.38m); Jed McCaleb, founder of Mt Gox, once the main exchange platform for bitcoins ($631,137); and Tallinn. Tech billionaires, like bankers, are more likely to have money to spare – but are they also more acutely aware of the dangers emerging in their industry? Tallinn speaks of Silicon Valley’s “culture of heroism”. “The traditional way of having a big impact in the world is taking something that the public thinks is big and trying to back it, like space travel or eradicating diseases,” he said. “Because I don’t have nearly enough resources for backing something like that, I’ve taken an area that’s massively underappreciated and not that well understood by the public.”

I wondered, when I spoke to Price and Tallinn, how big a difference they believed their work can make. It would be naive to imagine that one could ever convince scientists to stop working in a specific field – whether artificial intelligence or the manipulation of viruses – simply because it is dangerous. The best you could hope for would be a greater awareness of the risks posed by new technologies, and improved safety measures. You might want to control access to technology (just as we try to limit access to the enriched uranium needed to make nuclear bombs), but couldn’t this turn science into an increasingly elite occupation? Besides, it is hard to control access to technology for ever, and we know that in the modern, interconnected world small hacks can have catastrophic effects. So how great an impact can a few dozen passionate researchers make, spread across a handful of organisations?

“I’m not supremely confident it’s going to make a big difference, but I’m very confident it will make a small difference,” Price said. “And, given that we’re dealing with huge potential costs, I think it’s worth making a small difference because it’s like putting on a seat belt: it’s worth making a small effort because there’s a lot at stake.”

Tallinn was more upbeat. “There’s a saying in the community: ‘Shut up and multiply’ – just do the calculations,” he said. “Sometimes I joke when there’s particularly good news in this ecosystem, like when I’ve had a good phone call with someone, that ‘OK, that’s another billion saved’.

“Being born into a moment when the fate of the universe is at stake is a lot of fun.” 
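The “shut up and multiply” habit is easy to illustrate with a deliberately crude expected-value sum (my own numbers, not Tallinn's): multiply a huge number of potential future lives by a tiny change in the probability of extinction and the product is still enormous.

```python
# Illustrative only: both figures are assumptions chosen to show the shape of the argument.
future_people = 1e16     # assumed: potential future lives if humanity survives long-term
risk_reduction = 1e-7    # assumed: extinction probability shaved off by one project or donation

expected_lives = future_people * risk_reduction
print(f"{expected_lives:,.0f} expected lives saved")  # 1,000,000,000 - "another billion saved"
```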

Sophie McBain is an assistant editor of the New Statesman 


This article first appeared in the 17 September 2014 issue of the New Statesman, Scotland: What Next?


Are smart toys spying on children?

If you thought stepping on a Lego was bad, consider the new ways in which toys can hurt and harm families.

In January 1999, the president of Tiger Electronics, Roger Shiffman, was forced to issue a statement clearing the name of the company’s hottest new toy. “Furby is not a spy,” he announced to the waiting world.

Shiffman was speaking out after America’s National Security Agency (NSA) banned the toy from its premises. The ban was its response to a playground rumour that Furbies could be taught to speak, and therefore could record and repeat human speech. “The NSA did not do their homework,” said Shiffman at the time.

But if America’s security agencies are still in the habit of banning toys that can record, spy, and store private information, then the list of contraband items must be getting exceptionally long. Nearly 18 years after Tiger Electronics was forced to deny Furby’s secret agent credentials, EU and US consumer watchdogs are filing complaints about a number of WiFi and Bluetooth connected interactive toys, also known as smart toys, which have hit the shelves. Equipped with microphones and an internet connection, many have the power to invade both children’s and adults’ private lives.

***

“We wanted a smart toy that could learn and grow with a child,” says JP Benini, the co-founder of the CogniToys “Dino”, an interactive WiFi-enabled plastic dinosaur that can hold conversations with children and answer their questions. Benini and his team won the 2014 Watson Mobile Developer Challenge, allowing them to use the question-answering software IBM Watson to develop the Dino. As such, unlike the “interactive” toys of the Nineties and Noughties, Dino doesn’t simply reiterate a host of pre-recorded stock phrases, but has real, organic conversations. “We grew it from something that was like a Siri for kids to something that was more conversational in nature.”

In order for this to work, Dino has a speaker in one nostril and a microphone in the other, and once a child presses the button on his belly, everything they say is processed by the internet-connected toy. The audio files are turned into statistical data and transcripts, which are then anonymised and encrypted. Most of this data is, in Benini’s words, “tossed out”, but his company, Elemental Path, which owns CogniToys, does store statistical data about a child, which it calls “Play Data”. “We keep pieces from the interaction, not the full interaction itself,” he tells me.

“Play Data” are things like a child’s favourite colour or sport, which are used to make a profile of the child. This data is then available for the company to view, use, and pass on to third parties, and for parents to see on a “Parental Panel”. For example, if a child tells Dino their favourite colour is “red”, their mother or father will be able to see this on their app, and Elemental Path will be able to use this information to, Benini says, “make a better toy”.
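For readers curious what a pipeline like the one Benini describes might look like, here is a hypothetical sketch, not Elemental Path's actual code; every name and field is invented. The point is the data flow: the raw utterance is discarded, an opaque token stands in for the child, and only small “Play Data” facts are kept for the profile and the Parental Panel.

```python
# Hypothetical illustration of the described data flow - not the real CogniToys system.
import hashlib

def process_utterance(child_id: str, transcript: str, profiles: dict) -> None:
    # Anonymise: an opaque token replaces the child's identity before anything is stored.
    anon_id = hashlib.sha256(child_id.encode()).hexdigest()

    # Extract only small "Play Data" facts, not the recording or the full transcript.
    play_data = {}
    if "favourite colour is" in transcript:
        play_data["favourite_colour"] = transcript.rsplit(" ", 1)[-1]

    profiles.setdefault(anon_id, {}).update(play_data)
    # The raw transcript is simply dropped here ("tossed out").

profiles = {}
process_utterance("child-42", "my favourite colour is red", profiles)
print(profiles)   # {'<opaque token>': {'favourite_colour': 'red'}}
```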

Currently, the company has no plans to use the data with any external marketers, though it is becoming more and more common for smart toys to store and sell data about how they are played with. “This isn’t meant to be just another monitoring device that's using the information that it gathers to sell it back to its user,” says Benini.

Sometimes, however, Elemental Path does save, store, and use the raw audio files of what a child has said to the toy. “If the Dino is asked a question that it doesn’t know, we take that question and separate it from the actual child that’s asking it and it goes into this giant bucket of unresolved questions and we can analyse that over time,” says Benini. It is worth noting, however, that Amazon reviews of the toy claim it is frequently unable to answer questions, suggesting that a substantial amount of audio may be saved rather than only the occasional clip.

CogniToys have a relatively transparent Privacy Policy on their website, and it is clear that Benini has considered privacy at length. He admits that the company has been back and forth about how much data to store, originally offering parents the opportunity to see full transcripts of what their child had been saying, until many fed back that they found this “creepy”. Dino is not the first smart toy to be criticised in this way.

Hello Barbie is the world’s first interactive Barbie doll, and when it was released by Mattel in 2015, it was met with scorn by parents’ rights groups and privacy campaigners. Like Dino, the doll holds conversations with children and stores data about them which it passes back to the parents, and articles expressing concerns about the toy featured on CNN, the Guardian, and the New York Times. Despite Dino’s similarities, however, Benini’s toy received almost no negative attention, while Hello Barbie won the Campaign for a Commercial-Free Childhood’s prize for worst toy of the year 2015.

“We were lucky with that one,” he says. “Like the whole story of the early bird gets the worm but the second worm doesn’t get eaten. Coming second on all of this allowed us to be prepared to address the privacy concerns in greater depth.”

Nonetheless, Dino is in many ways essentially the same as Hello Barbie. Both toys allow companies and parents to spy on children’s private playtimes, and while the former might seem more troubling, the latter is not without its problems. A feature on the Parental Panel of the Dino also allows parents to see the exact wording of questions children have asked about certain difficult topics, such as sex or bullying. In many ways, this is the modern equivalent of a parent reading their child's diary. 

“Giving parents the opportunity to side-step their basic responsibility of talking to, engaging with, encouraging and reassuring their child is a terrifying glimpse into a society where plastic dinosaurs rule and humans are little more than machines providing the babies for the reptile robots to nurture,” says Renate Samson, the chief executive of privacy campaign group Big Brother Watch. “We are used to technology providing convenience in our lives to the detriment of our privacy, but allowing your child to be taught, consoled and even told to meditate by a WiFi connected talking dinosaur really is a step in the wrong direction.”

***

Toy companies and parents are one thing, however, and to many it might seem trivial for a child’s privacy to be compromised in this way. Yet many smart toys are also vulnerable to hackers, meaning security and privacy are under threat in a much more direct way. Ken Munro, of Pen Test Partners, is an ethical hacker who exposed security flaws in the interactive smart toy “My Friend Cayla” by making her say, among other things, “Calm down or I will kick the shit out of you.”

“We just thought ‘Wow’, the opportunity to get a talking doll to swear was too good,” he says. “It was the kid in me. But there were deeper concerns.”

Munro explains that any device could connect to the doll over Bluetooth, provided it was in range, as the set-up didn’t require a pin or password. He also found issues with the encryption processes used by the company. “You can say anything to a child through the doll because there's no security,” he says. “That means you've got a device that can potentially be used to groom a child and that's really creepy.”

Pen Test Partners tells companies about the flaws it finds in their products, in a process it calls “responsible disclosure”. Most of the time, companies are grateful for the information, and work through ways to fix the problem. Munro feels that Vivid Toy Group, the company behind Cayla, did a “poor job” of fixing the issue. “All they did was put one more step in the process of getting it to swear for us.”

It is one thing for a hacker to speak to a child through a toy, and quite another for them to listen to the child. Earlier this year, a hack on baby monitors ignited such concerns. But any toy with speech recognition that is connected to the internet is also vulnerable to being hacked. The data that is stored about how children play with smart toys is also under threat, as Fisher Price found out this year when a security company managed to obtain the names, ages, birthdays, and genders of children who had played with its smart toys. In 2015, VTech also admitted that five million of its customers had their data breached in a hack.

“The idea that your child shares their playtime with a device which could potentially be hacked, leaving your child’s inane or maybe intimate and revealing questions exposed is profoundly worrying,” says Samson. Today, the US Electronic Privacy Information Center (EPIC) said in a statement that smart toys “pose an imminent and immediate threat to the safety and security of children in the United States”. 

Munro says big brands are usually great at tackling these issues, but warns about smaller, cheaper brands who have less to lose than companies like Disney or Fisher Price. “I’m not saying they get it right but if someone does find a problem they’ve got a huge incentive to get it right subsequently,” he says of larger companies. Thankfully, Munro says that he found Dino to be secure. “I would be happy for my kids to play with it,” he says. “We did find a couple of bugs but we had a chat with them and they’re a good bunch. They aren’t perfect but I think they’ve done a hell of a lot better job than some other smart toy vendors.”

Benini appears alert to security and the credibility it gives his company. “We took the security very, very seriously,” he says. “We were still building our systems whilst these horror stories were coming about so I already set pipelines and parameters in place. With a lot of devices out there it seems that security takes a backseat to the idea, which is really unfortunate when you’re inviting these devices into your home.”

As well as being wary of smaller brands, Munro advises parents to look out for Bluetooth toys without a secure pairing process (i.e. any device can pair with the toy if it is near enough), and to think twice about which toys they connect to their WiFi. He also advises using unique passwords for toys and their corresponding apps.

“You might think ‘It's just a toy, so I can use the same password I put in everything else’ – dog’s name, football club, whatever – but actually if that ever got hacked you’d end up getting all your accounts that use that same password hacked,” he says.
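One low-effort way to follow that advice is to generate a distinct random password for each toy and its companion app and keep them in a password manager. Below is a minimal sketch using Python's standard library; it is my own example, not something Munro or Pen Test Partners specifically recommends.

```python
import secrets
import string

def new_password(length: int = 16) -> str:
    # A random password drawn from letters and digits; generate a fresh one per toy or app.
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(new_password())   # e.g. 'k2VZ9xQb7LmT4prS', never reused across accounts
```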

Despite his security advice, Munro describes himself as “on the fence” about internet-connected smart toys as a whole. “Most internet of things devices can be hacked in one way or another,” he says. “I would urge caution.”

***

Is all of this legal? Companies might not be doing enough ethically to protect the privacy of children, but are they acting responsibly within the confines of the law?

Benini explains that Dino complies with the United States Children's Online Privacy Protection Act (COPPA), which has no real equivalent in the UK. COPPA says that companies must have parental permission to collect personal information over the internet about children under 13 years of age. “We’ve tried to go above and beyond the original layout of COPPA,” says Benini, describing CogniToys’ transparent privacy documents. Parents give their consent for Elemental Path to collect their children’s data when they download the app that pairs with the toy.

Dino bears a striking similarity to Amazon Echo and Google Home, smart speakers that listen out for commands and questions in your home. Everything that is said to Amazon Echo is recorded and sent to the cloud, and an investigation by the Guardian earlier this year discovered that this does not comply with COPPA. We are therefore now in a strange position whereby many internet of things home devices are legally considered a threat to a child’s privacy, whereas toys with the same capabilities are not. This is an issue because many parents may not actually be aware that they are handing over their children’s data when installing a new toy.

As of today, EU consumer rights groups are also launching complaints against certain smart toys, claiming they breach the EU Unfair Contract Terms Directive and the EU Data Protection Directive, as well as potentially the Toy Safety Directive. Though smart toys may be better regulated in Europe, there are no signs that the problem is being tackled in the UK. 

At a time when the UK government is implementing unprecedented measures to monitor its citizens on the internet and Jeremy Hunt wants companies to scour teens’ phones for sexts, it seems unlikely that any legislation will be enacted to protect children’s privacy from being violated by toy companies. Indeed, many internet of things companies – including Elemental Path – admit they will hand over your data to government and law enforcement officials when asked.

***

As smart toys develop, the threat they pose to children only becomes greater. The inclusion of sensors and cameras means even more data can be collected about children, and their privacy can and will be compromised in worrying ways.

Companies, hackers, and even parents are denying children their individual right to privacy and private play. “Children need to feel that they can play in their own place,” says Samson. It is worrying to set a precedent where children get used to surveillance early on. All of this is to say nothing of the educational problems of owning a toy that will tell you (rather than teach you) how to spell “space” and figure out “5+8”.

In a 1999 episode of The Simpsons, “Grift of the Magi”, a toy company takes over Springfield Elementary and spies on children in order to create the perfect toy, Funzo. It is designed to destroy all other toys, just in time for Christmas. Many at the time criticised the plot for being absurd. Like the show's prediction of President Trump, however, it seems that we are living in a world where satire slowly becomes reality.

Amelia Tait is a technology and digital culture writer at the New Statesman.