"In science, you've got to go against what the elders are saying"

The string theorist Brian Greene has grown from maths prodigy to physics iconoclast. Now he hopes to convince us that our universe may be just one of many.

As a child, Brian Greene interpreted the story of Icarus differently to most people. "In my naivety, I thought that it was a story about a boy who was bucking authority, not doing what his father said and yet he was paying the ultimate price," he says. "As I got older and became a scientist, it seemed more off-base, because in order to have great breakthroughs in science, you've got to go against what the elders are saying."

Greene has spent his career as a physics professor doing exactly that, exploring the wild frontiers of superstring theory: an unproven, untested and possibly untestable outcrop of theoretical physics. It's an attempt to resolve a conundrum: that we have working explanations of the universe on a grand scale (Einstein's general relativity) and the subatomic scale (quantum mechanics) but no one can reconcile the two. String theory tries to provide a "theory of everything" by suggesting that all matter is, at its smallest level, made of one-dimensional, vibrating loops, whose oscillation patterns determine their mass and "flavour".

For more than a decade, this 48-year-old vegan has been its most compelling advocate. As he wrote in 1999, "String theory has the potential to show that all of the wondrous happenings in the universe -- from the frantic dance of subatomic quarks to the stately waltz of orbiting binary stars; from the primordial fireball of the big bang to the majestic swirl of heavenly galaxies -- are reflections of one, grand physical principle, one master equation."

Greene now lives in upstate New York but he was born in what was, in 1963, a rough district of Manhattan. His father, Alan, a high-school drop-out who became a professional musician and composer, spotted his son's precocious mathematical ability when he was just five and set him to work multiplying 30-digit numbers on huge sheets of construction paper. When that began to pall, he asked the young Brian to calculate the number of inches between the earth and the Andromeda galaxy. "That is a very straightforward calculation," he tells me now, sitting in the tea room of a London hotel, "because people know how far away it is in light years. Then you need to convert light years into miles, miles into feet and feet into inches."
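For the curious, here is a minimal sketch of that chain of conversions, assuming Andromeda sits roughly 2.5 million light years away (a round figure for illustration, not one quoted in the interview):

```python
# Rough sketch of the conversion Greene describes:
# light years -> miles -> feet -> inches.
# The distance to Andromeda (~2.5 million light years) is an assumed
# approximate figure, not one given in the article.

LIGHT_YEAR_IN_MILES = 5.879e12   # approximate miles light travels in one year
ANDROMEDA_LIGHT_YEARS = 2.5e6    # assumed approximate distance to Andromeda

miles = ANDROMEDA_LIGHT_YEARS * LIGHT_YEAR_IN_MILES
feet = miles * 5280              # feet per mile
inches = feet * 12               # inches per foot

print(f"{inches:.2e} inches")    # roughly 9.3e23 inches
```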

His mother has always been less impressed by what he does. "My mom says: 'Why aren't you a doctor?' and I'm like, 'I am a doctor!' and she's all, 'No, I mean a real doctor.' She reads my books but she says they give her a headache."

His run-down school ran out of things to teach him when he was 11, so one of the staff sent him knocking on the doors of the graduate students at Columbia University, bearing a note: "Take this kid on, he's hungry to learn." Thankfully, one of them did. "For no money," he points out. "Because we didn't have any money. He just did this for the love of learning."

Time travel

It is fitting that Greene is now a professor at Columbia and co-director of its Institute for Strings, Cosmology and Astroparticle Physics. Every few years, he gives what he calls a "report from the trenches" of cutting-edge theoretical physics. In 1999, he wrote an introduction to the subject, The Elegant Universe, followed in 2004 by The Fabric of the Cosmos, a book on space-time and the nature of reality. This year, it's parallel universes.

His latest book, The Hidden Reality, suggests that our universe could be one of many, "like slices of bread in a cosmic loaf" or "one expanding bubble in a grand, cosmic bubble bath". He explains the idea of a literal "fabric" of space-time by telling me that a spinning black hole exerts a drag on the space around it, "like a pebble in a vat of molasses -- as the pebble spins, the molasses spins with it". His relaxed, metaphorical prose style has got him into trouble before. One reviewer complained that he "indulge[d] in a pandering sort of lyricism", but of greater concern to Greene were those who read his clear explanations and then turned up at his graduate class expecting to understand the content. One man spent ten years in his basement trying to take his first book's ideas to the next level. "He wrote how his wife almost left him because he wouldn't come out of the basement," Greene tells me. "It was heartbreaking."

Asked to name his scientific hero, he picks Albert Einstein, along with Edward Witten, a physicist at the Institute for Advanced Study in Princeton. At the start of the 20th century, Einstein overturned the principles of physics by rejecting Isaac Newton's theory of gravity because it conflicted with his discovery that nothing travels faster than the speed of light. "So many of us," Greene says, "revere [Einstein] but it needs to be said -- because I've seen it reported in an odd way -- that we don't revere Einstein like some gurus of New Age cults may be revered, or some religious leaders. We are constantly critical of everyone's contributions, even Witten's. We look at a given paper, we bang it around, knock it, try to break it."

The same goes for string theory, which could turn out to be completely wrong. "It's a highly speculative subject but I don't shrink from that," he says. "If you ask me: 'Do I believe in string theory?' The answer is: no, I don't. I don't believe anything until it is experimentally proven [and] observationally confirmed."

How would he feel if it turned out to be a blind alley? His answer is surprising. "I would be thrilled." He explains: "I don't mean that in an off-handed way. My emotional investment is in finding truth. If string theory is wrong, I'd like to have known that yesterday. But if we can show it today or tomorrow, fantastic . . . It would allow us to focus our attention on approaches that have a better chance of revealing truth."

This isn't a discipline for the faint-hearted. When Greene was studying for his PhD at Oxford in the 1980s, he was tackling one of the fundamental ideas required to make the maths of string theory work: that there are more than three spatial dimensions. "Our eyes only see the big dimensions but beyond those there are others that escape detection because they are so small," he says. "Yet the exact shape of the extra dimensions has a profound effect on things that we can see, like what the electron weighs, its mass, the strength of gravity."

When he began his doctoral research, there were five possible shapes, one of which he ruled out by mathematical analysis. "The problem was, when I turned back to the list of shapes to look at the second, the list had grown. It was 100. Then 1,000, then 10,000. Ten thousand is still potentially doable -- it would keep an army of graduate students busy for a while -- but, nowadays, it has reached ten to the power of 500, which is an unimaginably huge number; the number of the particles in the observable universe is about ten to the power of 80."

Faced with this abundance, some physicists have decided to abandon the search, while others (including Greene) are trying to find equations to narrow down the field. A third group has a more radical proposal. "Those physicists have said we should take seriously the failure to pick out one shape from the many, because maybe that's telling us there is no unique shape. Maybe the maths is telling us that there are many universes and in each universe one of those shapes is in the limelight."

Mind the gap

Physicists can be an iconoclastic bunch but is there not a danger that their conviction gives fuel to the climate sceptics and creationists who say that science is a belief system, too? "Science is a self-correcting discipline that can, in subsequent generations, show that previous ideas were not correct," Greene counters. "When it comes to climate change . . . [and] the preponderance of data is pointing in a given direction, your confidence needs to rise proportionate to that. The data is very convincing."

He also has trenchant views about religious belief. "My view is that science only has something to say about a very particular notion of God, which goes by the name of 'god of the gaps'. If science hasn't given an explanation for some phenomenon, you could step back and say, 'Oh, that's God.' Then, when science does explain that phenomenon -- as it eventually does -- God gets squeezed out. I think the appropriate response for a physicist is: 'I do not find the concept of God very interesting, because I cannot test it.'"

Before I leave, I raise the idea of the "infinite multiverse", where every possible outcome of an event spins off a different universe. Dropped your piece of toast, buttered side down? There's now a universe where the opposite happened and you didn't have to scrape the fluff off your breakfast. It's one way of dealing with the fact that although a given outcome might have 30 per cent probability, and another might have 70 per cent, nowhere in the laws of physics is there a reason why one happens and not the other.

Doesn't that render the idea of free will redundant? "Yes," he says baldly. "We do not see free will in the equations: you and I are just particles governed by particular laws. Every individual, faced with five choices, would make all five -- one per universe. And all of the choices would be as real as the others." Don't we deserve credit for picking the choice that keeps us in this universe? Greene shakes his head. "Not really, because you are following one trajectory of choices. It is not as though there was a place in the mathematics where your free will dictated that particular set of choices. You are knocked around by the laws of physics, just like all your copies in the other universes."

I look at the preppy professor sitting opposite me drinking a cup of chai and wonder if there is a Brian Greene in another universe who was turned away by every grad student he asked for help. "And joined some gang and just been a street thug?" he says, smiling. "It is possible."

Brian Greene's "The Hidden Reality" is published by Allen Lane (£25)

Helen Lewis-Hasteley is an assistant editor of the New Statesman


This article first appeared in the 06 June 2011 issue of the New Statesman, Are we all doomed?


How nature created consciousness – and our brains became minds

In From Bacteria to Bach and Back, Daniel C Dennett investigates the evolution of consciousness.

In the preface to his new book, the philosopher Daniel Dennett announces proudly that what we are about to read is “the sketch, the backbone, of the best scientific theory to date of how our minds came into existence”. By the end, the reader may consider it more scribble than spine – at least as far as an account of the origins of human consciousness goes. But this is still a superb book about evolution, engineering, information and design. It ranges from neuroscience to nesting birds, from computing theory to jazz, and there is something fascinating on every page.

The term “design” has a bad reputation in biology because it has been co-opted by creationists disguised as theorists of “intelligent design”. Nature is the blind watchmaker (in Richard Dawkins’s phrase), dumbly building remarkable structures through a process of random accretion and winnowing over vast spans of time. Nonetheless, Dennett argues stylishly, asking “design” questions about evolution shouldn’t be taboo, because “biology is reverse engineering”: asking what some phenomenon or structure is for is an excellent way to understand how it might have arisen.

Just as in nature there is design without a designer, so in many natural phenomena we can observe what Dennett calls “competence without comprehension”. Evolution does not understand nightingales, but it builds them; your immune system does not understand disease. Termites do not build their mounds according to blueprints, and yet the results are remarkably complex: reminiscent in one case, as Dennett notes, of Gaudí’s church the Sagrada Família. In general, evolution and its living products are saturated with competence without comprehension, with “unintelligent design”.

The question, therefore, is twofold. Why did “intelligent design” of the kind human beings exhibit – by building robotic cars or writing books – come about at all, if unintelligent design yields such impressive results? And how did the unintelligent-design process of evolution ever build intelligent designers like us in the first place? In sum, how did nature get from bacteria to Bach?

Dennett’s answer depends on memes – self-replicating units of cultural evolution, metaphorical viruses of the mind. Today we mostly use “meme” to mean something that is shared on social media, but in Richard Dawkins’s original formulation of the idea, a meme can be anything that is culturally transmitted and undergoes change: melodies, ideas, clothing fashions, ways of building pots, and so forth. Some might say that the only good example of a meme is the very idea of a meme, given that it has replicated efficiently over the years despite being of no use whatsoever to its hosts. (The biologist Stephen Jay Gould, for one, didn’t believe in memes.) But Dennett thinks that memes add something important to discussions of “cultural evolution” (a contested idea in its own right) that is not captured by established disciplines such as history or sociology.

The memes Dennett has in mind here are words: after all, they reproduce, with variation, in a changing environment (the mind of a host). Somehow, early vocalisations in our species became standardised as words. They acquired usefulness and meaning, and so, gradually, their use spread. Eventually, words became the tools that enabled our brains to reflect on what they were doing, thus bootstrapping themselves into full consciousness. The “meme invasion”, as Dennett puts it, “turned our brains into minds”. The idea that language had a critical role to play in the development of human consciousness is very plausible and not, in broad outline, new. The question is how much Dennett’s version leaves to explain.

Before the reader arrives at that crux, there are many useful philosophical interludes: on different senses of “why” (why as in “how come?” against why as in “what for?”), or in the “strange inversions of reasoning” offered by Darwin (the notion that competence does not require comprehension), Alan Turing (that a perfect computing machine need not know what arithmetic is) and David Hume (that causation is a projection of our minds and not something we perceive directly). Dennett suggests that the era of intelligent design may be coming to an end; after all, our best AIs, such as the AlphaGo program (which beat the human European champion of the boardgame Go 5-0 in a 2015 match), are these days created as learning systems that will teach themselves what to do. But our sunny and convivial host is not as worried as some about an imminent takeover by intelligent machines; the more pressing problem, he argues persuasively, is that we usually trust computerised systems to an extent they don’t deserve. His final call for critical thinking tools to be made widely available is timely and admirable. What remains puzzlingly vague to the end, however, is whether Dennett actually thinks human consciousness – the entire book’s explanandum – is real; and even what exactly he means by the term.

Dennett’s 1991 book, Consciousness Explained, seemed to some people to deny the existence of consciousness at all, so waggish critics retitled it Consciousness Explained Away. Yet it was never quite clear just what Dennett was claiming didn’t exist. In this new book, confusion persists, owing to his reluctance to define his terms. When he says “consciousness” he appears to mean reflective self-consciousness (I am aware that I am aware), whereas many other philosophers use “consciousness” to mean ordinary awareness, or experience. There ensues much sparring with straw men, as when he ridicules thinkers who assume that gorillas, say, have consciousness. They almost certainly don’t in his sense, and they almost certainly do in his opponents’ sense. (A gorilla, we may be pretty confident, has experience in the way that a volcano or a cloud does not.)

More unnecessary confusion, in which one begins to suspect Dennett takes a polemical delight, arises from his continued use of the term “illusion”. Consciousness, he has long said, is an illusion: we think we have it, but we don’t. But what is it that we are fooled into believing in? It can’t be experience itself: as the philosopher Galen Strawson has pointed out, the claim that I only seem to have experience presupposes that I really am having experience – the experience of there seeming to be something. And throughout this book, Dennett’s language implies that he thinks consciousness is real: he refers to “conscious thinking in H[omo] sapiens”, to people’s “private thoughts and experiences”, to our “proper minds, enculturated minds full of thinking tools”, and to “a ‘rich mental life’ in the sense of a conscious life like ours”.

The way in which this conscious life is allegedly illusory is finally explained in terms of a “user illusion”, such as the desktop on a computer operating system. We move files around on our screen desktop, but the way the computer works under the hood bears no relation to these pictorial metaphors. Similarly, Dennett writes, we think we are consistent “selves”, able to perceive the world as it is directly, and acting for rational reasons. But by far the bulk of what is going on in the brain is unconscious, low-level processing by neurons, to which we have no access. Therefore we are stuck at an “illusory” level, incapable of experiencing how our brains work.

This picture of our conscious mind is rather like Freud’s ego, precariously balanced atop a seething unconscious with an entirely different agenda. Dennett explains wonderfully what we now know, or at least compellingly theorise, about how much unconscious guessing, prediction and logical inference is done by our brains to produce even a very simple experience such as seeing a table. Still, to call our normal experience of things an “illusion” is, arguably, to privilege one level of explanation arbitrarily over another. If you ask me what is happening on my computer at the moment, I shall reply that I am writing a book review on a word processor. If I embarked instead on a description of electrical impulses running through the CPU, you would think I was being sarcastically obtuse. The normal answer is perfectly true. It’s also true that I am currently seeing my laptop screen even as this experience depends on innumerable neural processes of guessing and reconstruction.

The upshot is that, by the end of this brilliant book, the one thing that hasn’t been explained is consciousness. How does first-person experience – the experience you are having now, reading these words – arise from the electrochemical interactions of neurons? No one has even the beginnings of a plausible theory, which is why the question has been called the “Hard Problem”. Dennett’s story is that human consciousness arose because our brains were colonised by word-memes; but how did that do the trick? No explanation is forthcoming. Dennett likes to say the Hard Problem just doesn’t exist, but ignoring it won’t make it go away – even if, as his own book demonstrates, you can ignore it and still do a lot of deep and fascinating thinking about human beings and our place in nature.

Steven Poole’s books include “Rethink: the Surprising History of New Ideas” (Random House Books)

This article first appeared in the 16 February 2017 issue of the New Statesman, The New Times