
How quantum computing will change the world

We are on the cusp of a new era of computing, with Google, IBM and other tech companies using a theory pioneered by Planck and Einstein to build machines capable of solving seemingly impossible tasks.

In 1972, at the age of ten, I spent a week somewhere near Windsor – it’s hazy now – learning how to program a computer. This involved writing out instructions by hand and sending the pages to unseen technicians who converted them into stacks of cards punched with holes. The cards were fed overnight into a device that we were only once taken to see. It filled a room; magnetic tape spooled behind glass panels in big, grey, wardrobe-sized boxes. The next morning, we’d receive a printout of the results and the day would be spent finding the programming faults that had derailed our calculations of pi to the nth decimal place.

There was awed talk of computer experts who worked at an even rawer level of abstraction, compiling programs (no one called it coding then) in the opaque, hieroglyphic notation of “machine code”. Those were the days when you had to work close to the guts of the machine: you thought in terms of central processing units, circuit diagrams, binary logic. If you wanted to play games, you had to write them yourself – by the 1980s, on a BBC Micro or Sinclair ZX Spectrum with less graphical sophistication than an ATM.

I was reminded of those clunky, makeshift early days of public access to computers when, in September, I saw one of IBM’s quantum computers at the company’s research labs in Rüschlikon, a suburb of Zurich. On a hill overlooking Lake Zurich, in the early autumn sunshine, the labs have a laid-back air that is more Californian than Swiss. In the past several decades, they have been the incubator of Nobel Prize-winning scientific innovations. Things grow here that affect the world.

This computer has the improvised appearance of a work in progress. It’s a sturdy metal cylinder the size and shape of a domestic water-heater immersion tank, suspended on a frame of aluminium beams reaching to the ceiling and brought to life by a dense tangle of wires that lead to a bank of off-the-shelf microwave oscillators. The “brain” – the component in which binary ones and zeros of data are crunched from input to output – sits deep inside this leviathan, on a microchip the size of a baby’s fingernail.

The last time I visited IBM’s Zurich centre, in 2012, its head of science and technology, Walter Riess, talked about the company’s plans for an imminent “post-silicon” era, after the silicon-chip technology of today’s computers had reached the physical limits of its ability to offer more computing power. Back then, quantum computing seemed like a far-off and speculative option for meeting that challenge.

Now it’s real. This is what computing felt like in the 1950s, Riess told me in September as he introduced me to the new device. It has become routine to juxtapose images of these room-filling quantum machines with the first prototype digital computers, such as the valve-driven ENIAC (or “Electronic Numerical Integrator and Computer”) at the University of Pennsylvania, used for ballistics calculations by the US military. If this is where quantum computing is now, such pictures imply, just try to imagine what’s coming.

Quantum computing certainly sounds like the future. It’s the technology of choice for sci-fi film-makers who want their artificial intelligence networks to have unlimited potential. But what is it really about, and what might it do?

***

This “quantum information technology” is often presented as more of the same, but better. We have become so accustomed to advances in computing being reflected in slimmer, faster laptops and bigger memories that quantum computing is often envisaged in the same terms. It shouldn’t be.

It represents the first major shift in how computing is done since electronic computing devices were invented in the vacuum-tube-powered, steam-punk 1940s. Digital computers manipulate information encoded in binary form as sequences of ones and zeros; for example, as pulses of electrical current. The circuits contain “logic gates”, which produce binary outputs that depend in well-defined ways on the inputs: a NOT gate, say, simply inverts the input, converting a one to a zero and vice versa.
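To make the classical picture concrete, here is a minimal sketch in Python – my own illustration, not something drawn from IBM or the article's sources – of logic gates acting on bits, and of how anything bigger is just a composition of them.

```python
# Illustrative sketch: binary logic gates, the building blocks of a
# classical digital computer.

def NOT(a: int) -> int:
    """Invert a bit: 1 becomes 0, 0 becomes 1."""
    return 1 - a

def AND(a: int, b: int) -> int:
    """Output 1 only if both inputs are 1."""
    return a & b

def XOR(a: int, b: int) -> int:
    """Output 1 if exactly one input is 1."""
    return a ^ b

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two bits, returning (sum, carry) - pure gate composition."""
    return XOR(a, b), AND(a, b)

print(NOT(1))            # 0
print(half_adder(1, 1))  # (0, 1): binary 1 + 1 = 10
```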

All the rest is software, whether that involves converting keystrokes or mouse movements into images, or taking numbers and feeding them into an equation to work out the answer. It doesn’t much matter what the hardware is – transistors replaced vacuum tubes in the 1950s and have since been shrunk to far smaller than the size of bacteria – so long as it can perform these transformations on binary data.

Quantum computers are no different, except in one crucial respect. In a conventional (“classical”) computer, one bit of binary data can have one of just two values: one or zero. Think of the transistors as acting like the light switches in your house: each is set to either on or off. But in a quantum computer, these switches, called quantum bits or qubits (pronounced “cue-bits”), have more options, because they are governed by the laws of quantum theory.

This notoriously recondite theory started to take shape at the beginning of the 20th century through the work of Max Planck and Albert Einstein. In the 1920s, scientists such as Werner Heisenberg and Erwin Schrödinger devised mathematical tools to describe the kinds of phenomena that Einstein and Planck had revealed. Quantum mechanics seemed to say that, at the minuscule level of atoms, the world behaves very differently from the classical mechanics that had been used for centuries to describe how objects exist and move. Atoms and subatomic particles, previously envisaged as tiny grains of matter, seemed sometimes to show behaviour associated instead with waves – which are not concentrated at a single point in space but are spread throughout it (think of sound waves, for example).

What’s more, the properties of such quantum objects did not seem to be confined to single, fixed values, in the way that a tossed coin has to be either heads or tails or a glove has to be left- or right-handed. It was as if they could be a mixture of both states at once, called a superposition.

That “as if” is crucial. No one can say what the “true nature” of quantum objects is – for the simple yet perplexing reason that quantum theory doesn’t tell us. It just provides a means of predicting what a measurement we make will reveal. A quantum coin can be placed in a superposition of heads and tails: once it is tossed, either state remains a possible outcome when we look. It’s not that we don’t know which it is until we look; rather, the outcome isn’t fixed, even with the quantum coin lying flat on the table covered by our palm, until we look. Is the coin “both heads and tails” before we look? You could say it that way, but quantum mechanics declines to say anything about it.
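A rough numerical sketch of that quantum coin – again my own illustration, following the standard textbook description – represents the state as a pair of complex “amplitudes” whose squared sizes give the probabilities of what we see when we finally look.

```python
# Illustrative sketch: a "quantum coin" in an equal superposition of
# heads and tails, described by a vector of two complex amplitudes.
import numpy as np

heads = np.array([1, 0], dtype=complex)
tails = np.array([0, 1], dtype=complex)

coin = (heads + tails) / np.sqrt(2)   # equal superposition

# The theory predicts only measurement statistics: the probability of
# each outcome is the squared magnitude of its amplitude.
probabilities = np.abs(coin) ** 2
print(probabilities)                  # [0.5 0.5]

# Simulate "looking" ten times: each look yields one definite outcome.
rng = np.random.default_rng(seed=1)
print(rng.choice(["heads", "tails"], size=10, p=probabilities))
```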

Thanks to superposition, qubits can, in effect, encode one and zero at the same time. As a result, quantum computers can represent many more possible states of binary ones and zeros. How many more? A classical bit can represent two states: zero and one. Add a bit (an extra transistor, say) to your computer’s processor and you can encode one more piece of binary information. Yet if a group of qubits are placed in a joint superposition, called an entangled state, each additional qubit doubles the encoding capacity. By the time you get to 300 qubits – as opposed to the billions of classical bits in the dense ranks of transistors in your laptop’s microprocessors – you have 2^300 options. That’s more than the number of atoms in the known universe.
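The arithmetic behind that claim is easy to check; the figure of roughly 10^80 atoms in the observable universe is the commonly quoted estimate, and the sketch below is mine.

```python
# Back-of-envelope: each extra entangled qubit doubles the number of
# amplitudes needed to describe the register.
for n in (1, 2, 3, 10):
    print(f"{n} qubits -> 2**{n} = {2 ** n} basis states")

atoms_in_universe = 10 ** 80          # commonly quoted rough estimate
print(2 ** 300 > atoms_in_universe)   # True: about 2 x 10^90 vs 10^80
```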

You can only access this huge range of options, though, if all the qubits are mutually dependent: in a collective or “coherent” state, which, crudely speaking, means that if we do something to one of them (say, flip a one to a zero), all the others “feel” it. Generally, this requires all the qubits to be placed and maintained in an entangled state.

ENIAC, one of the world’s first digital computers, at the University of Pennsylvania

The difficulty of building a quantum computer mostly involves creating and sustaining these coherent states of many qubits. Quantum effects such as superposition and entanglement are delicate and easily disrupted. The jangling atomic motions caused by heat can wash them away. So, to be coherently entangled, qubits must be cooled to extremely low temperatures – we’re typically talking less than a degree above absolute zero (-273°C) – and kept well isolated from the laboratory environment: that is, from the very equipment used to manipulate and measure them. That’s partly why the IBM quantum computer I saw is so bulky: much of it consists of cooling equipment and insulation from the lab environment.

Because of the fragility of entanglement, it has so far been possible only to create quantum computers with a handful of qubits. With more than a few dozen, keeping them all entangled stretches current quantum technologies to the limit. Even then, the qubits remain in a coherent state for just a fraction of a second. You have only that long to carry out your entire quantum computation, because once the coherence starts to decay, errors infect and derail the calculation.

There’s a consensus that silicon transistors are the best bits for conventional computers, but there’s no such agreement about what to make qubits from. The prototype devices that exist so far outside academic laboratories – at Google, IBM and the Canadian company D-Wave, for example – use miniaturised electronic circuits whose quantum behaviour stems from an exotic effect called superconductivity, in which metals at very low temperatures conduct electricity without any resistance. The advantage of such qubits is that the well-developed microfabrication technologies used for making silicon circuitry can be adapted to make them on silicon chips, and the qubits can be designed to order.

But other researchers are placing their money on encoding data into the quantum energy states of individual atoms or ions, trapped in an orderly array (such as a simple row) using electric or magnetic fields. One advantage of atomic qubits is that they are all identical. Information can be written into, read out from and manipulated within them using laser beams and microwaves. One leading team, headed by Chris Monroe of the University of Maryland, has created a start-up company called IonQ to get the technology ready for the market.

***

Quantum computers have largely been advertised on the promise that they will be vastly faster at crunching through calculations than even the most powerful of today’s supercomputers. This speed-up – immensely attractive to scientists and analysts solving complex equations or handling massive data sets – was made explicit in 1994 when the American mathematician Peter Shor showed in theory that a computer juggling coherent qubits would be able to factor large numbers much more efficiently than classical computers. Reducing numbers to their simplest factors – decomposing 12 to “two times two times three”, for example – is an exercise in elementary arithmetic, yet it becomes extremely hard for large numbers, because no known classical algorithm can find the factors efficiently. Factorising a 300-digit number would take current supercomputers hundreds of thousands of years, working flat out.
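A sketch of my own makes the point. Factorising by trial division is trivial to write down, but the work grows roughly with the square root of the number being factored – in other words, exponentially with the number of digits.

```python
# Illustrative sketch: factorising by trial division. Fine for small
# numbers, hopeless for the 300-digit numbers used in encryption.
def factorise(n: int) -> list[int]:
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(factorise(12))   # [2, 2, 3]
# The loop runs up to sqrt(n) times, so the work grows exponentially
# with the number of digits - which is why a 300-digit number is far
# out of reach this way, and why smarter classical methods still choke.
```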

For this reason, a lot of data encryption – such as when your credit card details are sent to verify an online purchase – uses codes based on factors of large numbers, which no known computer can crack. Yet Shor showed that a quantum factorisation algorithm could find factors much more efficiently than a classical one can.

How it does so is hard to say. The usual explanation is that, because of all those entangled qubits, a quantum computer can carry out many calculations in parallel that a classical computer would do only one after the other. The Oxford physicist David Deutsch, who pioneered the theory of quantum computing in the 1980s, argues that the quantum computer must be regarded as lots of classical computers working at the same time in parallel universes – a picture derived from Deutsch’s belief in the controversial “many-worlds” interpretation of quantum mechanics, which holds that every possible measurement outcome on a superposition is realised in separate universes. Deutsch’s view isn’t the common one, but even the milder notion of parallel calculations doesn’t satisfy many researchers. It’s closer to the truth to say that entanglement lets qubits share information efficiently, so that quantum logic operations somehow “count for more” than classical ones.

Theorists tend then to speak of quantum computation as drawing on some “resource” that is not available to classical machines. The exact nature of that resource is a little vague and probably somewhat different for different types of quantum computation. As Daniel Gottesman of the Perimeter Institute in Waterloo, Canada, told me, “If you have ‘enough’ quantum mechanics available, in some sense, then you have a speed-up, and if not, you don’t.”

Regardless of exactly how quantum speed-up works, it is generally agreed to be real – and worth seeking. All the same (although rarely acknowledged outside the field), there’s no reason to believe that it’s available for all computations. Aside from the difficulty of building these devices, one of the biggest challenges for the field is to figure out what to do with them: to understand how and when a quantum computation will better a classical one. As well as factorisation, quantum computation should be able to speed up database searches – and there’s no question how useful that would be, for example in combing through the masses of data generated in biomedical research on genomes. But there are currently precious few other concrete applications.
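The search speed-up usually cited here is Grover’s algorithm, which – on the standard textbook analysis, not a claim spelled out in this article – needs only about the square root of N queries to find an item in an unsorted collection of N, where a classical search needs about N. The arithmetic, sketched below, shows how quickly that gap opens up.

```python
# Rough arithmetic sketch of the quadratic speed-up attributed to
# Grover's quantum search algorithm (textbook figures, not measured ones).
import math

for N in (10 ** 6, 10 ** 12):
    classical = N                  # roughly N queries, one item at a time
    quantum = math.isqrt(N)        # roughly sqrt(N) queries
    print(f"N = {N:.1e}: classical ~{classical:.1e}, quantum ~{quantum:,}")
```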

One of the big problems is dealing with errors. Given the difficulty of keeping qubits coherent and stable, these seem inevitable: qubits are sure to flip accidentally now and again, such as a one changing to a zero or getting randomised. Dealing with errors in classical computers is straightforward: you just keep several copies of the same data, so that faulty bits show up as the odd one out. But this approach won’t work for quantum computing, because it’s a fundamental and deep property of quantum mechanics that making copies of unknown quantum states (such as the states of qubits over the course of a computation) is impossible. Developing methods for handling quantum errors has kept an army of researchers busy over the past two decades. It can be done, but a single error-resistant qubit will need to be made from many individual physical qubits, placing even more demands on the engineering.
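For contrast, here is the classical trick in miniature (a sketch of my own): keep three copies of each bit and take a majority vote, so that a single flipped copy stands out as the odd one out. It is exactly this copying step that quantum mechanics forbids for unknown qubit states.

```python
# Illustrative sketch: a classical three-bit repetition code with
# majority-vote decoding.
import random

def encode(bit: int, copies: int = 3) -> list[int]:
    return [bit] * copies

def add_noise(codeword: list[int], flip_prob: float = 0.1) -> list[int]:
    # Each copy is flipped independently with probability flip_prob.
    return [b ^ (random.random() < flip_prob) for b in codeword]

def decode(codeword: list[int]) -> int:
    # Majority vote: the odd one out is assumed to be the error.
    return int(sum(codeword) > len(codeword) / 2)

random.seed(0)
sent = encode(1)
received = add_noise(sent)
print(received, "->", decode(received))
# Quantum error correction cannot copy an unknown qubit like this; it
# spreads one logical qubit across many entangled physical qubits instead.
```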

And while it is tempting to imagine that more powerful quantum computers mean more qubits, the equation is more complex. What matters, too, is how many logical steps in a computation you can carry out on a system of qubits while they remain coherent and relatively error-free – what quantum scientists call the depth of the calculation. If each step is too slow, having more qubits won’t help you. “It’s not the number of qubits that matters, but their quality,” John Martinis, who heads Google’s quantum-computing initiative at its Santa Barbara research centre, told me.

***

One of the likely first big applications of quantum computing isn’t going to set the world of personal computing alight, but it could transform an important area of basic science. Computers operating with quantum rules were first proposed in 1982 by the American physicist Richard Feynman. He wasn’t concerned with speeding up computers, but with improving scientists’ ability to predict how atoms, molecules and materials behave using computer simulations. Atoms observe quantum rules, but classical computers can only approximate these in cumbersome ways: predicting the properties of a large drug molecule accurately, for example, requires a state-of-the-art supercomputer.

Quantum computers could hugely reduce the time and cost of these calculations. In September, researchers at IBM used the company’s prototype quantum computer to simulate a small molecule called beryllium dihydride. A classical computer could, it’s true, do that job without much trouble – but the quantum computer doing it had just six qubits. With 50 or so qubits, these devices would already be able to do things beyond the means of classical computers.

That feat – performing calculations impossible by classical means – is a holy grail of quantum computing: a demonstration of “quantum supremacy”. If you asked experts a few years ago when they expected this to happen, they’d have been likely to say in one or two decades. Earlier this year, some experts I polled had revised their forecast to within two to five years. But Martinis’s team at Google recently announced that they hope to achieve quantum supremacy by the end of this year.

That’s the kind of pace the research has suddenly gathered. Not so long ago, the question “When will I have a quantum computer?” had to be answered with: “Don’t hold your breath.” But you have one now. And I do mean you. Anyone worldwide can register online to use IBM’s five-qubit quantum computer called IBM Q Experience, housed at the company’s research lab in Yorktown Heights, New York, and accessible via a cloud-based system.

During my Zurich visit, two young researchers, Daniel Egger and Marc Ganzhorn, walked me through the software. You configure a circuit from just five qubits – as easily as arranging notes on a musical stave – so that it embodies the algorithm (the sequence of logical steps) that carries out your quantum computation. Then you place your job in a queue, and in due course the answers arrive by email.
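The workflow looks roughly like the sketch below, written with Qiskit, the open-source toolkit IBM released for composing such circuits. The two-qubit example is my own, not something Egger and Ganzhorn showed me, and the calls for actually submitting a job vary with the version of the cloud service.

```python
# Minimal sketch of composing a circuit with Qiskit: put one qubit into
# superposition, entangle it with a second, then measure both.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)    # two qubits, two classical readout bits
qc.h(0)                      # Hadamard gate: superposition on qubit 0
qc.cx(0, 1)                  # controlled-NOT: entangle qubits 0 and 1
qc.measure([0, 1], [0, 1])   # read both qubits out

print(qc.draw())
# Submitting the circuit to real hardware then goes through IBM's cloud
# provider and a job queue; the exact calls vary with the service version.
```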

Of course, you need to understand something about quantum computing to use this resource, just as you needed to know about classical computing to program a BBC Micro to play Pong. But kids can learn this, just as we did back then. In effect, a child can now conduct quantum experiments that several decades ago were the preserve of the most hi-tech labs in the world, and that were for Schrödinger’s generation nothing but hypothetical thought experiments. So far, more than 60,000 users have registered to conduct half a million quantum-computing experiments. Some are researchers, some are students or interested tinkerers. IBM Q has been used in classrooms – why just teach quantum theory when you can learn by quantum experiments? The British composer Alexis Kirke has used the device to create “quantum music”, including “infinite chords” of superposed notes.

IBM has just announced that it is making a 20-qubit device available to its corporate clients, too – a leap perhaps comparable to going from a BBC Micro to an Intel-powered laptop. Opening up these resources to the world is an act of enlightened self-interest. If the technology is going to flourish, says the company’s communications manager, Chris Sciacca, it needs to nurture an ecosystem: a community of people familiar with the concepts and methods, who between them will develop the algorithms, languages and ultimately plug-in apps other users will depend on.

It’s a view shared across the nascent quantum computing industry. From Zurich, I travelled to Heidelberg to chair a panel on the topic that included Martinis. It was only with the advent of personal computing in the 1970s, he told the audience, that information technology began to take off, as a generation of computer geeks started to create and share software tools – leading eventually to the Linux operating system, the Google search engine, Facebook and all the rest.

There’s no shortage of people eager to get involved. Sciacca says that IBM’s researchers can barely get their work done, so inundated are they with requests from businesses. Every company director wants to know, “When can I get one?” While online access is all very well, for reasons of security and prestige you can be sure that companies will want their own quantum machine, just as they have their own web and email servers. Right now, these are hardly off-the-shelf devices and the cloud-based model seems the logical way forward. But no one is likely to forget the apocryphal comment attributed to IBM’s long-time chief Thomas J Watson in the 1940s that about five computers should be enough for the entire world.

***

The commercial potential is immense, and already the giants such as Google and IBM have stiff competition from young upstarts including Rigetti Computing in California, founded in 2013 by the former IBM scientist Chad Rigetti. The company has built what it describes as the world’s first commercial fabrication facility for quantum circuitry, and it’s in a bullish mood.

“We expect commercially valuable uses of quantum computing within the next five years,” Madhav Thattai, Rigetti’s chief strategy officer, told me, adding: “In ten to 15 years, every major organisation will use this technology.” What the chances are of us all having portable personal devices that rely on liquid-helium cooling is another matter.

This commercialisation will be accelerated by piggybacking on the infrastructure of conventional computing, such as access to cloud- and web-based resources. It’s likely that quantum computers won’t be needed for every aspect of a computational problem – and may not even be best suited to some of them – but will work in partnership with classical computers, brought in to do only what they do well.

Martinis and his colleagues at Google are testing a 22-qubit quantum computer and have a 49-qubit model in development. IBM has recently announced that it has tested a 50-qubit device, too. Achieving even these numbers is a daunting task – but Martinis has already turned the Google lab into a world leader in astonishingly short order. Given the precipitous recent progress after decades of talk and theory, the challenge is to balance the extraordinary, transformative potential of the field against the risk of hype. It’s as if we’re in the 1950s, trying to predict the future of computing: no one had any idea where it would lead.

In Heidelberg, I asked Martinis how he feels about it all. Is he pinching himself and thinking, “My God, we have real quantum computers now, and we’re already simulating molecules on them”? Or is he fretting about how on Earth he and his peers will surmount the technical challenges of getting beyond today’s “toy” devices to make the promised many-qubit machines that will change the world?

I should have seen the answer coming. A flicker of a smile played across his face as he replied, “I’m in a superposition of both.” 

This article first appeared in the 07 December 2017 issue of the New Statesman, Christmas special

A 1907 painting of Spinoza, who was excommunicated from Judaism in 1656. Credit: SAMUEL HIRSZENBERG

Why atheists are true believers too

How atheisms are imitating the religions they claim to reject.

In 1995 Richard Dawkins became the first ever “professor for the public understanding of science” at Oxford University. By the time he retired, 13 years later, it looked as if he had privately renegotiated his contract; for he was now functioning as Oxford’s very own professor for the public misunderstanding of religion.

In The God Delusion (2006) he argued that the existence of God was a scientific hypothesis which was almost – almost – demonstrably false. Miracles were scientifically impossible (yes, professor, I think we knew that: the clue was in the word “miracles”). And the creation story in the Book of Genesis was very bad science indeed. Opposing the stupidities of modern “creationism”, and all the other pseudo-scientific or anti-scientific dogmas of the fundamentalists, is one thing. Criticising the moral evils committed by religious fanatics is another, and no less worthwhile. Yet to treat religion itself as merely a defective form of science is a strangely crude error, rather like thinking that poetry is just a way of conveying factual statements that are to be tested for their truth or falsehood.

In his new book, Seven Types of Atheism, John Gray – who, I should mention, is no more a religious believer than I am – has little time for the so-called New Atheism of Dawkins and Co. The confusion of religion with science is only one of the points he objects to. Even if it can be shown that religion involves the creation of illusions, he argues, that does not mean that religion can or should be dispensed with; for “there is nothing in science that says illusion may not be useful, even indispensable, in life”. As for the idea of the American New Atheist Sam Harris that we can develop “a science of good and evil” which will contain all the correct liberal values: Gray sees this as a piece of astonishing and culpable naivety, ignoring nearly two centuries’ worth of evidence that scientism in ethics and illiberalism go happily hand-in-hand.

If this short book were just another intervention in the Dawkinsian “God debate”, it would be very short indeed. In fact it would get no further than page 23 where, at the end of his brief opening chapter, Gray concludes damningly that “the organised atheism of the present century is mostly a media phenomenon, and best appreciated as a type of entertainment”.

But the New Atheism is the least of the seven varieties that make up the subject-matter of this book. The others are all much more interesting, being connected with significant elements in our culture. And if the phrase “our culture” sounds parochial, well, that is an issue Gray deals with explicitly, pointing out that what we call “atheism” is something much more specific than just a rejection or absence of religion as such. It is a rejection of certain religious beliefs – and that narrows the field already, as many religions of the world are not primarily belief-systems at all. In particular, Gray argues, it is a rejection of belief in an omnipotent creator-god, which means that while atheism is Christianity’s close relative, it bears no relation to Hinduism or Buddhism at all.

So this is a book about post-Christian thinking – most of it, in Gray’s view, pretty bad thinking, too. One of his targets is secular humanism, which he describes as “a hollowed-out version of the Christian belief in salvation through history”. Another is what he calls “making a religion from science”, a delusion which he traces all the way from Mesmerism in the late 18th century, via dialectical materialism in the 19th and 20th, to those futurist thinkers today who dream of uploading a human being’s consciousness to computer circuits, thereby rendering it immortal. And another is political religion, “from Jacobinism through communism and Nazism to contemporary evangelical liberalism”.

Obviously there are overlaps between these three varieties of modern atheism; dialectical materialism, for instance, has also formed part of the creed of Marxist political religion. The one fundamental thing they have in common, on Gray’s account, is that they are all doctrines of progress, of an onwards and upwards march of humanity through history. Whether he is right to say that secular humanism is committed to this view, I am not so sure; doubtless, those who believe in humanist ethics will also think that if more and more people adopt their ethical system the world will become a better place, yet it’s not clear why they should regard that as inevitable.

But one thing at least is clear: John Gray regards all belief in human progress as the most pernicious of delusions. Despite all his eloquence on this subject, some readers may feel that his argument runs away with him, taking him further than he needs to go. It would be enough, surely, to say that the basic moral qualities of human beings have not changed over time, and that there’s no reason to think that any improvements in human behaviour that have taken place are part of a pattern of inevitable progress. Yet Gray goes further, claiming that there has been no real improvement at all.

The abolition of slavery? Slave auctions in “Islamic State” territory have been advertised on Facebook. The abandonment of torture? It has persisted at Guantanamo Bay. Well, yes; but having pockets of slavery here and there in the world is not the same as the situation 200 years ago, when it was a huge and entrenched institution, questioned only by a small minority. Yes, torture continues, but not as a standard judicial procedure. And in many countries there have been substantial, long-term changes in attitude and treatment where female subjugation, child labour and the criminalisation of homosexuality are concerned. Surely there must be some way of acknowledging this, without relapsing into Pollyannaish Steven Pinkerism?

One reason for Gray’s emphasis on the theme of temporal progress is that it fits these various secular atheisms into a larger pattern – that of salvation through history. And this brings us to the core of his argument: out of the whole range of major religions, only Christianity works in a historical dimension like this, which means that the secular atheisms are imitating, or unconsciously reproducing, a central feature of the very religion they claim to reject.

He makes this point again and again. These modern atheists’ view of the world is “inherited” from Christianity. Their belief in progress is “a secular avatar of a religious idea of redemption”. Jacobinism and Bolshevism were “channels” for the millenarian myths of Christianity. Bolshevism was in a “lineage” going back to medieval millenarianism. The apocalyptic myths of radical Christian movements “renewed themselves” in secular, political forms.

Having watched Gray wield his scalpel so effectively on other writers’ arguments, I can’t help thinking that this one deserves a few incisions. What does it mean to say that a communist who yearns for the coming of the classless society is really expressing just the same view as a millenarian looking to the reign of Christ on earth? The form of the belief may be roughly similar, but the content is entirely different. And if these are “inherited” ideas standing in a “lineage”, what is the evidence of a continuous chain of transmission – from, say, the 16th-century radical Anabaptists of Münster (whose chaotic quasi-communist experiment Gray describes in graphic detail) to the Bolsheviks of Petrograd and Moscow? As for the religious myths “renewing themselves” in a secular guise: this seems perilously close to the mindset of Dawkins’s theory of “memes”, which Gray has scornfully dismissed as hardly a theory at all.

Gray also mentions a Gnostic “impulse” that has recurred, unchanged, over two millennia. But if the same impulse can produce a religious idea in one period and a secular one in another, it seems that the impulse is something that stands behind both, itself neither secular nor religious. In which case, the modern atheisms may be not so much reproducing religious beliefs as expressing some basic yearnings that are pre-religious or non-religious in themselves. These are dark theoretical waters, and I am not convinced that Gray has got to the bottom of them.

Yet what he has done is to produce a marvellously stimulating account of some major currents of post-Christian thought, in which ideas and arguments leap constantly off the page like white-hot sparks from an anvil. The dismissals are concise and often devastating; but there are also wonderfully funny details, lovingly accumulated by a wry observer of human foolishness. It is nice to learn, for example, that Auguste Comte’s secular religion of Positivism imposed on its followers “special types of clothing, with buttons placed on the back so that they could not be worn without the help of others – thereby promoting altruism”. And I would challenge anyone to read Gray’s account of the cult of Ayn Rand, with its compulsory cigarette-smoking and rational tap-dancing, and not laugh out loud.

But what of Gray’s own post-religious beliefs? He certainly does not belong in the fifth category discussed here, that of “misotheists” – the Marquis de Sade, Dostoevsky and William Empson – whose views were shaped by a positive hatred of God. (Here, at least, he has no difficulty in showing that some kinds of atheism are dependent intimately and inseparably on Christian theology.) Gray’s own sympathies are divided between his two final varieties: the naturalistic, undogmatic and guaranteed progress-free atheism of the philosopher George Santayana; and the philosophico-theological theories of Spinoza and Schopenhauer, which argued obscurely both that a greater reality, possibly to be identified as Spirit or God, existed, and that to talk about it as a god who created the world, or intervened in it, or issued commands to humans, was to misunderstand it entirely.

Santayana was himself an admirer of Spinoza, and towards the end of the book, Gray quotes his characterisation of the Dutch-Jewish philosopher as follows: “By overcoming all human weaknesses, even when they seem kindly or noble, and by honouring power and truth, even if they should slay him, he entered the sanctuary of an unruffled superhuman wisdom.” I am not sure that this is quite the image that readers should take away of Gray, whose tolerance of human weaknesses – at the personal level, if not the intellectual one – seems admirably generous. Nor can it be guaranteed that people will acquire unruffled superhuman wisdom by reading this book. More likely they will find themselves tremendously, even painfully, ruffled. And I mean that as high praise, for an author who is one of the greatest intellectual provocateurs of our time. 

Noel Malcolm is editor of the Clarendon Edition of the Works of Thomas Hobbes and a fellow of All Souls, Oxford

John Gray will appear in conversation with Jason Cowley at Waterstones Trafalgar Square, London WC2, on 2 May (newstatesman.com/events)

Seven Types of Atheism
John Gray
Allen Lane, 176pp, £17.99
