The New Statesman Essay - The love of a robot

Is it really possible, as a new film suggests, that artificial intelligence like David (from AI: Artificial Intelligence) could ever truly love?

In Supertoys, a trilogy of sci-fi short stories by Brian Aldiss, Monica tries to love her little boy, David, while David - poor, unhappy soul whose only true companion is his Teddy - writes pathetic, unfinished notes to try to convince her of his love. Neither succeeds. Monica is a real, flesh-and-blood woman who is married to Henry, the managing director of Synthank, and David is one of Henry's creations: a very fine - indeed, quite wonderful - robot, designed as a child substitute in an overpopulated world where genuine procreation is rationed.

This poignant story is the basis for Steven Spielberg's film AI: Artificial Intelligence, released in the UK later this month. Aldiss writes sci-fi at its literary and substantive best, and Supertoys has the resonance of myth - specifically, that of Pinocchio. More to the point, Supertoys poses in modern guise the most persistent and elusive of all of philosophy's conundrums: what is mind? Is it mind that makes us human, and is it exclusive to humankind? More broadly, as David says to Teddy (also a quasi-animate robot): "How do you tell what are real things from what aren't real things?"

Stanley Kubrick originally planned to film Supertoys in 1982, but gave up - perhaps, suggests Aldiss, because "my story looks inwards" and so was difficult to put on the screen. It might, in any case, have failed as a film then because, says Aldiss, "no one would have believed it". But, after two more decades of artificial intelligence, factories are "manned" by robots, and computers already beat grandmasters at chess. So what, now, is there to disbelieve? Androids of the David and Teddy class seem on the cards.

Henry's claim, too, that he has "found a way to link computer circuitry with synthetic flesh" has ceased even to be futuristic. Some present-day medical prostheses achieve this. It is already possible to bypass parts of the brain with electronic circuitry - the beginnings of brain-computer chimeras. Today's computers are silicon-based, while biological systems are carbon-based (which is what chemists mean by "organic"), but the twain are meeting. Besides, scientists now contemplate - and, indeed, build - computers based on biological molecules.

When biotechnologists in the 1980s were asked awkward questions about the future, they could duck them with the words "biologically impossible". Cloning, designer babies - these were just the fantasies of scaremongers and sensationalists (notably the unspeakable "media") and need not be taken seriously. I wrote about cloning in the early 1990s (up to a point, it had already been done in frogs) and was told by a well-known London-based professor of embryology that it was "no more likely than a time machine". Dolly followed a couple of years later.

In short, the expression "biologically impossible" has lost its force - or, indeed, lost all meaning. Future biotechnologists might encounter insuperable hurdles of a biological nature, but this can no longer be assumed a priori, as it was in the Eighties. The only safe and reasonable assumption now is that any apparent fantasy should be taken seriously, provided only that it does not transgress what Sir Peter Medawar called "the bedrock laws of physics". David the apparently sensitive robot is not implausible. We might reasonably be surprised, in the early 21st century, that anyone should have thought it was. In principle, market forces could bring such a creature into being - even if we regard such a "supertoy" as frivolous, the market for frivolity is huge. Nothing can halt the march of biotech except our own misgivings.

Still, we must ask: is this apparently sensitive android really possessed of sensibility? Aldiss suggests how slippery such issues are in a brief exchange between Henry and David after Monica's death. Henry: "You only think you are happy or sad. You only think you loved Teddy or Monica." David: "Did you love Monica, Daddy?" Henry (sighing): "I thought I did."

Can computers ever really be like us, and if not, why not? The similarities are obvious and beguiling - we can both work out certain problems and apparently engage in dialogue, albeit only of a closed-ended kind, as in chess - but the differences are striking, too. Marvin Minsky, one of the founding fathers of artificial intelligence at the Massachusetts Institute of Technology, confesses that the more he tries to emulate the human brain, the more wonderful he finds it. High-flown arithmetic, of the kind that computers do in passing and that we find so impressive, is conceptually very simple; it's just that human brains are spectacularly bad at it.

Language, on the other hand, is conceptually extremely difficult - laden not simply with "meaning", but with nuance, allusion, connotation, and quite unfettered by plonking logic. Computers can engage in stilted dialogue and even simulate speech, but it will be a very long time indeed before they indulge in metaphor, jokes or slang - the things that human beings manage so effortlessly, and reprimand their children for doing too much.

Yet the differences between human and computer "thinking" do not lie simply in the kinds of things that each is good at. The strategy is different. Computers are relentlessly and dourly logical; they are tolerable to work with only because they do what they do so blindingly fast, processing billions of bits a second. The brains of humans, like those of all animals, are survival machines that use a variety of strategies, of which logic is only one, and not usually dominant. We think our way through life with rules of thumb, making guesses and taking chances based on past successes. Computers would find us intolerable, too, if they had opinions.
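
To make the contrast concrete, here is a small sketch in Python (the towns and distances are invented purely for illustration): the first routine is the computer's way, doggedly checking every possible ordering of a short round trip; the second is closer to the animal's way, a rule of thumb that simply heads for whatever looks nearest and accepts that it will sometimes be wrong.

```python
# A toy contrast: exhaustive logic versus a rule of thumb.
# Both try to order a handful of towns into a short round trip.
from itertools import permutations
from math import dist

towns = {"A": (0, 0), "B": (3, 4), "C": (6, 1), "D": (2, 7), "E": (8, 5)}

def tour_length(order):
    """Total distance of visiting the towns in this order and returning home."""
    points = [towns[t] for t in order]
    return sum(dist(points[i], points[(i + 1) % len(points)]) for i in range(len(points)))

def exhaustive(start="A"):
    """The computer's way: relentlessly check every possible ordering."""
    rest = [t for t in towns if t != start]
    return min(((start,) + p for p in permutations(rest)), key=tour_length)

def rule_of_thumb(start="A"):
    """The animal's way: always head for the nearest town not yet visited."""
    order, unvisited = [start], set(towns) - {start}
    while unvisited:
        nearest = min(unvisited, key=lambda t: dist(towns[order[-1]], towns[t]))
        order.append(nearest)
        unvisited.remove(nearest)
    return tuple(order)

print("exhaustive:   ", exhaustive(), round(tour_length(exhaustive()), 1))
print("rule of thumb:", rule_of_thumb(), round(tour_length(rule_of_thumb()), 1))
```

The rule of thumb finishes almost instantly and is usually good enough, which is roughly how evolved brains get through the day; the exhaustive search is guaranteed correct, but only because the machine can grind through the possibilities blindingly fast.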

Besides, humans do not merely think and solve immediate problems. We have consciousness, whatever that is. We are emotional. Taken all in all, we have "mind". Nobody supposes that present-day computers possess consciousness or feeling, and, with neither, they surely cannot be "mindful".

Many artificial intelligence enthusiasts claim that the differences are only those of complexity. Consciousness is nothing more than the brain looking at itself, thinking about its own thinking. Computers could surely acquire such an ability with suitable circuitry. It may not be a matter simply of making them more intricate; perhaps there must be new computer architecture, with the different parts of the circuit interacting in ways not yet conceived. But time will sort this out. Already, the latest robots have emotion built into them. Without emotion - some sense of excitement about jobs to be done, and satisfaction when they are - they have no motivation at all and remain inert. The human brain, in the end, is an electrical circuit, albeit mediated by chemical transmitters. Why should a silicon-based circuit not emulate a carbon-based circuit, if that is what it is required to do?

The first great modern computer scientist, Alan Turing, said that, in principle, functional computers could be made out of anything. All that matters is the architecture, the relationship between the parts. An abacus, beads on wires, could do the trick in principle. It's just that electrons, whizzing through semiconductors, are quicker. In brains, electric impulses flow around neurones. Why should the raw materials make a difference?
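
A minimal sketch of Turing's point, written in Python purely for convenience: everything below is composed from a single primitive relation, and nothing in the logic cares whether that primitive is realised in silicon, beads on wires or neurones.

```python
# The logic lives in the wiring, not the stuff.
# A single primitive relation (NAND) is composed into a one-bit adder;
# how "nand" is physically realised is irrelevant to what the circuit computes.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def xor(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def half_adder(a, b):
    """Add two one-bit numbers: returns (sum, carry)."""
    return xor(a, b), and_(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "->", half_adder(a, b))
```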

Turing is too clever to argue with and we must concede that computers can indeed be made of anything at all. But we know that computers, at least of the present day, do not do all that brains do. The more refined aspects of human mental prowess, such as consciousness and "mind", may well be specific properties of the raw materials. The Oxford mathematician Roger Penrose has suggested that perhaps - just perhaps - biological brains work in ways that cannot yet be understood because they partake of physical principles that have not yet been conceived. Physicists acknowledge, after all, that present-day physics has huge lacunae in it - notably between classical physics, as now represented by Einstein's theories of relativity, and quantum mechanics. Both seem true, when rigorously tested in their own terms; and yet they do not always prove compatible with each other. The physics that operates in the brain, so Penrose speculates, lies in what, for the present, is a lacuna.

If he is right, then the substance of which the brain is composed may be extremely important. "Mind" may be a product of physics that is currently unanalysed and which, though manifest in living material, cannot be partaken of through silicon circuits. Perhaps, if this is so, silicon computers can never do more than imitate the human brain. They can never replicate what it does - though it may be possible to build biological circuits that do.

Even more radically, Peter Fenwick, a London-based neuropsychiatrist, argues that the elusive quality known as "mind" is not, in any serious measure, understood at all. Perhaps our entire view of the universe is wrong, and in particular the notion that consciousness and mind are simply "emergent properties" that swim into being when the chemistry and architecture are appropriately complicated. Perhaps, rather, "mind" is a fundamental quality of the universe itself, which human minds tap into. Many a Buddhist or Christian mystic would warm to such a concept - which is no reason to suppose it is wrong.

There is one final, essentially sociological twist, which Aldiss does not touch upon directly, though other sci-fi writers have done so. Unless we believe, as William Paley did in the late 18th century, in the literal Creator God who made human beings in the same way that human beings make pocket watches, then we have to conclude that the human brain is not designed at all. It evolved by natural selection. Evolved systems have tremendous strengths. They encapsulate solutions to all the problems that have been posed by the environment over many millions of years. Those problems are more various and devious than any mere designer could envisage; and the systems that evolve to cope with them are more intricate and subtle than any designer could conceive. "What a piece of work is a man," said Hamlet and, as Minsky acknowledges, he was dead right.

But evolved systems have their weaknesses, too. Natural selection is opportunist, but not creative. Each new generation is limited in materials and form by what was available to the generation before. It cannot simply seize what it needs from the surroundings, as a designer can. Hence the solutions to the problems posed by life often have a rough-and-ready quality. Solutions to obsolete problems remain as vestiges, like gooseflesh and the human appendix. Evolved systems, in short, are full of caprice and redundancy. This means they cannot exhaustively be understood. After all, a prime way to understand how living things work is by "reverse engineering": looking at what they do, and then inferring the problems they are solving. But the problems they are really solving may be hidden deep in their history. It's not like reverse-engineering an enemy plane that has crash-landed in your back garden.

Computers, however, are designed - and the process of designing has strengths and weaknesses of its own. The strength is in the flexibility: when designers make a mistake, they can go back to the drawing board, which natural selection can never do. The weakness is that the problems that need to be solved cannot be predicted completely in advance. In practice, as Richard Webb of the Centre for Philosophy at the London School of Economics points out, artefacts, once designed, are further refined by natural selection within the market place: consumers discover their weaknesses and find out what they can really do. Artefacts intended for one purpose often succeed, as animals do, by applying themselves to something completely different. Future computers will design themselves and, however much we may initially make them in our image, they will increasingly grow away from us.

Machines are innately capricious, too. As soon as computer programs become even a little complex, it becomes theoretically impossible to predict, exhaustively, all that they are capable of. For such reasons, fly-by-wire aircraft occasionally fall out of the sky, as some quirk of programming is revealed that had not been anticipated. David and his ilk will be unpredictable in spades. The social relationships between unpredictable human beings and advanced, innately unpredictable robots are beyond guessing. The cosy, master-servant compact that is generally envisaged would be precarious indeed.
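
A concrete taste of how quickly exhaustive prediction runs out - the example is the famous Collatz procedure, nothing to do with flight software, and chosen here purely as an illustration: no one has yet proved that this handful of lines halts for every starting number.

```python
def collatz_steps(n: int) -> int:
    """Halve n if it is even, otherwise triple it and add one; count the steps to reach 1.
    Whether this loop terminates for every positive n is an unsolved mathematical problem."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps, wandering up past 9,000 before settling to 1
```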

The American philosopher of science Thomas Kuhn proposed in the early 1960s that science does not progress in steady, logical steps, as conventionally envisaged. Instead, it lurches. At any one time, scientists subscribe to a common world-view that Kuhn called the "paradigm".

The paradigm of physics before the 20th century was that of Newtonian mechanics, and now it is relativity and quantum mechanics. The paradigm of modern biology is a synthesis of Darwin's natural selection with Mendel's genetics, updated by molecular biology. But, over time, all paradigms are liable to break down, as data accumulate that can no longer be accommodated. Then, said Kuhn, there is a "paradigm shift".

The less imaginative scientists assume that all outstanding questions can be answered within their existing paradigm, that more of the same researches will provide whatever answers are lacking. The great scientists, however, think beyond the paradigm. Historians tend to argue that Newton gave up experimental physics in the late 17th century because he ran out of ideas. Surely, though, he ran out of physics: he knew that his mechanics was not adequate, but he also knew that 17th-century data and maths could not lead to better understanding. He needed to drop out for 150 years, and return to converse with Faraday and Maxwell.

Today's physicists, it has been suggested, may face the same problem: they have developed the idea of "superstrings", as the most fundamental of all fundamental entities in the universe, but they may need 23rd-century maths to understand them. This surely is the case also with the problems of mind and consciousness, and of whether computers can truly partake of them. At bottom, the issue is empirical, not to be answered a priori, and we just don't have the data or the means of thinking about what we do have. Penrose and Fenwick are saying different things, but they both imply that to understand the human brain we need a new paradigm. We should not assume that it will simply extend the present one, which involves neurology and pharmacology. It may well include new physics, or elements of eastern mysticism.

The next few centuries will surely bring us supertoys. They will also bring insights. Whether they bring the enlightenment we seek remains to be seen.

Colin Tudge's latest book, In Mendel's Footnotes, is published in paperback by Jonathan Cape (£7.99)