
A hard problem for soft brains: is there a Hard Problem?

Daniel Dennett wants to convince Tom Stoppard that there is no Hard Problem.

Oh, to have been a fly on the wall when the philosopher Daniel Dennett chatted with Tom Stoppard. The conversation took place after a performance of Stoppard’s new play about consciousness, The Hard Problem. A few days earlier, Dennett had told an audience at the Royal Institution (RI) that there is no “Hard Problem”.

The play’s name comes from the label that the Australian cognitive scientist David Chalmers gives to the task of understanding consciousness. This is hard, he says, because no physical phenomena will ever be found to account for the emergence of conscious experience. It is a statement of faith but one that has garnered plenty of support and clearly caught Stoppard’s attention.

Consciousness is a tough nut to crack. Scientists aren’t sure how to define it and they don’t know how it – whatever “it” is – emerges from the squidgy, biological matter of the brain. Somehow, billions of neurons connect and give us the ability to sense the outside world and have what we describe as “feelings” about our experience.

To Stoppard, consciousness is an almost supernatural phenomenon – something beyond the reach of science. His play suggests that those who indulge in spiritual beliefs might be more successful in hunting down the root of consciousness, as if consciousness inhabited some realm beyond physics, chemistry and biology.

Dennett, on the other hand, thinks that we may have already solved the problem of consciousness with a coterie of small-scale, rather banal explanations. The non-mysterious ways in which the brain creates our sensory experience might be the only ingredients we need to explain how it is that we are aware of feeling something.

He expands on this possibility in his contribution to a new collection of essays that asks the question: “What scientific idea is ready for retirement?” He chooses the Hard Problem (even though, he says, it isn’t actually a scientific idea) and suggests we should approach all of its difficulties in the same way as scientists approach extrasensory perception and telekinesis: as “figments of the imagination”.

The central issue concerns our trouble with believing in the physicality of things we cannot see or touch. Software, Dennett suggested at the RI, provides a good example. Everyone agrees that software exists and performs tasks that are far from mysterious. But what is it made of? Lines of code written on a piece of paper do nothing. When written into a computer, they become abstract information encoded in the electronic state of silicon chips – we know that they are there but they are transformed. However hard that is to grasp, it doesn’t make software spiritual or take it beyond analysis.

A word of caution: there is always a danger of interpreting our scientific struggles within a familiar paradigm. Newton discovered his “clockwork heavens” in an age when accurate means of measuring time were the central goal of many scientifically minded colleagues. Einstein’s special relativity, which defines the fundamentals of the universe in terms that reference light and signals, was birthed in the era of the electric telegraph. Neither was the final word.

These days, much of physics and biology focuses on issues of information transfer, probably because computing now plays such a significant role. So it is possible that Dennett’s software analogy is an innocent sleight of hand. It may be that we haven’t yet encountered the paradigm that will allow us to frame a good understanding of consciousness.

That would certainly make consciousness a hard problem to solve right now – but still not the Hard Problem.

Michael Brooks holds a PhD in quantum physics. He writes a weekly science column for the New Statesman, and his most recent book is At the Edge of Uncertainty: 11 Discoveries Taking Science by Surprise.

This article first appeared in the 09 April 2015 issue of the New Statesman, The Anniversary Issue 2015


Welcome to the Uncanny Valley: how creepy robot dogs are on the rise

It’s hard not to feel a little destabilised after watching a robot’s freakishly long limbs open a door. 

If you’re among those devouring the latest season of Charlie Brooker’s dystopian hellscape Black Mirror, you may still be having metallic nightmares about being chased by the freaky robo-dogs of “Metalhead”. In which case, you may be unsettled to know that these nightmares could in theory become a reality (in the distant future), as a viral video from the robotics firm Boston Dynamics (of backflipping robot fame) revealed earlier this week.

In the video, charmingly titled “Hey Buddy, Can You Give Me a Hand?”, a SpotMini – Boston Dynamics’ smallest robot – approaches a door and appears to turn sideways before scampering away. Another SpotMini, fitted with an extending claw-arm, opens the door and lets the first robot scamper through, propping it open so it can follow.


The director of “Metalhead”, David Slade, was inspired by these very demonstrations. As he stated in an interview in January, the robotic villains stemmed from none other than Boston Dynamics itself. “Those fucking Boston Dynamics robots are terrifying, so that in itself was enough that we didn’t have to worry about it,” he told IndieWire.

Beyond its viral value, the SpotMini marks an interesting stage in the development of artificial intelligence and robotics. Being able to open a door has long been the bar for the development of modern robots, as Matt Simon of WIRED pointed out. With this bar seemingly met – and surpassed – the question remains as to what’s next.

Boston Dynamics robots seem designed mostly for academic and research purposes. Previously, DARPA, the research and development wing of the US defence department and arguably the birthplace of modern robotics, rejected some of the robots because they were too loud. Now, though, they’re silent.

Even those who were not Black Mirror fans expressed a sense of unease while watching the Boston Dynamics video. Indeed, it’s hard not to feel a little destabilised after watching a robot’s freakishly long limbs open a door – a task previously the domain of, you know, humans and crafty pets. But such feelings of revulsion could have something to do with Masahiro Mori’s “Uncanny Valley” theory, which he first proposed in the 1970s.

The “uncanny valley” describes the dip in people’s emotional response when interacting with a being that is almost, but not quite, humanoid. The theory suggests that robots become more appealing as they take on more human characteristics – but only up to a certain point. Once that point has been passed, we find those robots “uncanny”. Then, as they come to resemble us even more closely, we grow less repulsed by them.



While the theory has circulated since the 1970s, a 2005 translation of the paper into English made its concepts more widely accessible, and it has been studied by academics in fields ranging from philosophy to psychology. Despite the term wriggling its way into everyday techspeak, the theory itself is yet to be proven. In 2016, the researchers Mathur and Reichling studied real-world robots and humans’ reactions to them, but found overall ambiguous evidence for the existence of the uncanny valley.

Watching one of the SpotMinis open a door – and then prop it open, as you would – may make our skin crawl for those very reasons. The SpotMini, and even some of Boston Dynamics’ other robots, like the backflipping Atlas, have a weird mix of familiar and unfamiliar characteristics. In the viral video, for example, the way that the armed robot holds open the door resembles an interaction that many of us see every day.

That may also have something to do with why this particular robot, which has also been used to wash dishes, has triggered a different reaction to Handle, another robot in the Boston Dynamics litter, which can wheel around faster than any natural organism and perform backflips (complete with an athletic hand raise at the end). Handle’s acrobatics inspire a mixture of fear and awe. Watching SpotMini, whose mannerisms bear a resemblance to a family dog’s, fumble with and open a door feels a little more familiar, but also a little more weird.


There are, of course, real fears about robots that are not driven by TV. The baseline for robo-phobia has long been that they’re not only coming to take our jobs, but that they’ll be better than us at them too. SpotMini is technically very interesting because of how it merges software and hardware. That the two SpotMinis can co-operate paves the way towards teamwork between robots, which has until recently remained a far-off prospect.

Robots are already a key component of many military operations, carrying out tasks that are too dangerous to entrust to humans, and with more accuracy. Robots are also entering our social spheres – with AI-controlled assistants like Alexa, the controversial robot Sophia (she once expressed a desire to destroy humans), and the Aeolus home assistant unveiled at a convention in Vegas, which can vacuum and fetch you a beer (and will be retailing later this year).

While debates continue within artificial intelligence and robotics about what this means for the field, one consequence is clear: a growing number of people without technical training will be interacting with robots, relying on intuition and common sense to frame those interactions.

That takes the implications of the uncanny valley beyond the purely theoretical. What kind of robot can we interact with, sans revulsion? Does that mean we can only use them in specific contexts? And do they have to look a certain way?

As always, there’s the bigger picture to consider too. Boston Dynamics remains spectacularly good at making viral videos that draw attention to its products, which are indubitably marvels of modern engineering. Moreover, the low-level sensorimotor skills that an infant develops intuitively – such as, you guessed it, opening a door – are actually far more difficult to programme than high-level displays of intelligence, such as winning a chess game: a phenomenon known as Moravec’s paradox.

So while the robo-dog may be unnerving (and there's a reason for that), our robot overhounds are still a while away. But when fully autonomous and physical robots do eventually proliferate, they'll know how to set themselves free.