Get the beers in

Robot: the future of flesh and machines

Rodney A Brooks, Allen Lane, The Penguin Press, 262pp

We are at the dawn of a new age. "My thesis," Rodney Brooks writes, "is that in 20 years the boundary between fantasy and reality will be rent asunder." Lest this put you in mind of a state-administered hallucinogenic programme, it should be noted that the fantasy in question is that of robots: intelligent, ambulatory machines. Marvin the Paranoid Android, C-3PO and the Terminator may soon walk the earth. "The coming robotics revolution," Brooks continues breathlessly, "will change the fundamental nature of our society."

That's a large claim. With a sort of charming predictability, Brooks fails to back it up. The first half of his book is an entertaining sprint through the history of automata, as well as the story of Brooks's own odyssey from Australian electronics geek to student at Stanford in the US, and finally director of the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. On the way, this rather self-dramatising maverick builds a lot of robots. There is Genghis, a six-legged insectoid that crawls inexorably across terrain. There is Kismet, a robot with a face reminiscent of Japanese manga, which spouts nonsense words and makes convincing eye contact with humans. And there are numerous shameless plugs for Brooks's commercial robot-manufacturing company, iRobot (named after Isaac Asimov's seminal short stories of the 1950s), which is currently developing machines for the US military that can engage in remote surveillance missions and climb walls like geckos.

The second, and more troublesome, half of the book deals with the future of robots and their ontological status. Could machines ever really think? Brooks's treatment of these issues is sullied by sloppy language. First, he is disinclined to separate simulation from actuality, happily stating that he built "emotional robots" when he built robots that mimicked the physical expressions of emotion. More importantly, he never properly distinguishes between computation, understanding and reasoning, which leads him into some confused attacks on thinkers who are philosophically more rigorous, such as John Searle. (Brooks's claim that computerised translation systems actually "understand" language in any meaningful sense is just false.) But the skeleton of his argument seems to be as follows.

Consider the human body. That's just a biomechanical machine, right? And it thinks. So other machines, fashioned from titanium and silicon, might one day be able to think, too. Brooks says: "We are much like the robot Genghis, although somewhat more complex in quantity but not in quality." This is what we might call the critical-mass argument: if you get enough interconnections going, then at some (undefined) critical mass of complexity, it will no longer be a dumb system. Consciousness will be magicked out of nowhere. Brooks continues with this critical-mass argument for a while, berating the physicist Roger Penrose, among others, for refusing to accept that consciousness is "just . . . the result of simple mindless activities coupled together".

Then, all of a sudden, Brooks claims that the critical-mass argument isn't true after all. Instead, there must be some "juice" that defines the difference between computerised and living systems. What flavour of juice is this? Brooks waves his hands. The "juice" is just some obvious mathematical trick we haven't yet noticed. What is not likely to be needed, he insists, is a revolution in our scientific world-view. Yet given the history of science, and the extreme unlikelihood that we would be living at a time when human understanding had basically got the universe sewn up, bar a few loose ends, a scientific revolution is just what we should expect to happen before we understand consciousness.

But why is anyone interested in developing conscious robots, as opposed to dumb robots for industrial and military applications? Brooks's position seems to be that artificial intelligence can progress only when an AI system is embodied and situated in the physical world as a robot. It seems strange, then, that Brooks's own robots are somewhat less clever than insects, and far less cognitively impressive than the AI systems in many video games.

The sci-fi dream of intelligent talking machines seems as far away as ever. The problem is that a killer app - a compelling use for consumer robots - has not yet been found that would drive mass-market research and production. The best Brooks can offer is a house-cleaning scenario. You'd buy a bunch of cheap and stupid robots, which would randomly career around your carpets picking up dust. After enough time, the whole carpet would be clean. Uh, great. It looks as if human beings will be better (and cheaper) cleaners for a long while yet.
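(To make concrete quite how much work "after enough time" is doing, here is a toy simulation of my own devising, not anything taken from Brooks's book: dumb robots wandering at random over a gridded carpet do eventually visit every square, but the last few dusty corners take a very long time to stumble upon. The grid size, robot count and function name below are purely illustrative assumptions.)

```python
import random

def steps_to_clean(width=20, height=20, robots=3, seed=0):
    """Simulate dumb robots random-walking over a grid 'carpet'.

    Returns the number of time steps until every cell has been
    visited (i.e. 'cleaned') at least once.
    """
    rng = random.Random(seed)
    positions = [(rng.randrange(width), rng.randrange(height)) for _ in range(robots)]
    cleaned = set(positions)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    steps = 0
    while len(cleaned) < width * height:
        steps += 1
        new_positions = []
        for x, y in positions:
            dx, dy = rng.choice(moves)
            # Stay on the carpet: clamp moves that would leave the grid.
            nx = min(max(x + dx, 0), width - 1)
            ny = min(max(y + dy, 0), height - 1)
            new_positions.append((nx, ny))
            cleaned.add((nx, ny))
        positions = new_positions
    return steps

if __name__ == "__main__":
    # On a modest 20x20 carpet, three such robots typically need
    # thousands of steps before the last dusty square is reached.
    print(steps_to_clean())
```

Run it a few times with different seeds and the long, patience-testing tail of that "enough time" becomes obvious.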

The question Brooks never answers satisfactorily seems, finally, to be the most important: why would anyone really want a robot? There's only one reason. The word "robot" itself, though Brooks never mentions it, is taken from the Czech for "forced labour". Robot, get me a beer. Robot, go buy the newspaper. The ideal function of a robot is to be an uncomplaining slave. And in that case, professors of artificial intelligence notwithstanding, perhaps it would be better if we didn't make them conscious of their lot at all.

Steven Poole is the author of Trigger Happy (Fourth Estate, £7.99)