Long reads
19 December 2017

How quantum computing will change the world

We are on the cusp of a new era of computing, with Google, IBM and other tech companies using a theory launched by Planck and Einstein to build machines capable of solving seemingly impossible tasks.

By Philip Ball

In 1972, at the age of ten, I spent a week somewhere near Windsor – it’s hazy now – learning how to program a computer. This involved writing out instructions by hand and sending the pages to unseen technicians who converted them into stacks of cards punched with holes. The cards were fed overnight into a device that we were only once taken to see. It filled a room; magnetic tape spooled behind glass panels in big, grey, wardrobe-sized boxes. The next morning, we’d receive a printout of the results and the day would be spent finding the programming faults that had derailed our calculations of pi to the nth decimal place.

There was awed talk of computer experts who worked at an even rawer level of abstraction, compiling programs (no one called it coding then) in the opaque, hieroglyphic notation of “machine code”. Those were the days when you had to work close to the guts of the machine: you thought in terms of central processing units, circuit diagrams, binary logic. If you wanted to play games, you had to write them yourself – by the 1980s, on a BBC Micro or Sinclair ZX Spectrum with less graphical sophistication than an ATM.

I was reminded of those clunky, makeshift early days of public access to computers when, in September, I saw one of IBM’s quantum computers at the company’s research labs in Rüschlikon, a suburb of Zurich. On a hill overlooking Lake Zurich, in the early autumn sunshine, the labs have a laid-back air that is more Californian than Swiss. In the past several decades, they have been the incubator of Nobel Prize-winning scientific innovations. Things grow here that affect the world.

This computer has the improvised appearance of a work in progress. It’s a sturdy metal cylinder the size and shape of a domestic water-heater immersion tank, suspended on a frame of aluminium beams reaching to the ceiling and brought to life by a dense tangle of wires that lead to a bank of off-the-shelf microwave oscillators. The “brain” – the component in which binary ones and zeros of data are crunched from input to output – sits deep inside this leviathan, on a microchip the size of a baby’s fingernail.

The last time I visited IBM’s Zurich centre, in 2012, its head of science and technology, Walter Riess, talked about the company’s plans for an imminent “post-silicon” era, after the silicon-chip technology of today’s computers had reached the physical limits of its ability to offer more computing power. Back then, quantum computing seemed like a far-off and speculative option for meeting that challenge.

Now it’s real. “This is what computing felt like in the 1950s,” Riess told me in September as he introduced me to the new device. It has become routine to juxtapose images of these room-filling quantum machines with the first prototype digital computers, such as the valve-driven ENIAC (or “Electronic Numerical Integrator and Computer”) at the University of Pennsylvania, used for ballistics calculations by the US military. If this is where quantum computing is now, such pictures imply, just try to imagine what’s coming.

Quantum computing certainly sounds like the future. It’s the technology of choice for sci-fi film-makers who want their artificial intelligence networks to have unlimited potential. But what is it really about, and what might it do?


***

This “quantum information technology” is often presented as more of the same, but better. We have become so accustomed to advances in computing being reflected in slimmer, faster laptops and bigger memories that quantum computing is often envisaged in the same terms. It shouldn’t be.

It represents the first major shift in how computing is done since electronic computing devices were invented in the vacuum-tube-powered, steam-punk 1940s. Digital computers manipulate information encoded in binary form as sequences of ones and zeros; for example, as pulses of electrical current. The circuits contain “logic gates”, which produce binary outputs that depend in well-defined ways on the inputs: a NOT gate, say, simply inverts the input, converting a one to a zero and vice versa.

All the rest is software, whether that involves converting keystrokes or mouse movements into images, or taking numbers and feeding them into an equation to work out the answer. It doesn’t much matter what the hardware is – transistors replaced vacuum tubes in the 1950s and have since been shrunk to far smaller than the size of bacteria – so long as it can perform these transformations on binary data.

Quantum computers are no different, except in one crucial respect. In a conventional (“classical”) computer, one bit of binary data can have one of just two values: one or zero. Think of it as transistors acting like the light switches in your house: they are set to either on or off. But in a quantum computer, these switches, called quantum bits or qubits (pronounced “cue-bits”), have more options, because they are governed by the laws of quantum theory.

This notoriously recondite theory started to take shape at the beginning of the 20th century through the work of Max Planck and Albert Einstein. In the 1920s, scientists such as Werner Heisenberg and Erwin Schrödinger devised mathematical tools to describe the kinds of phenomena that Einstein and Planck had revealed. Quantum mechanics seemed to say that, at the minuscule level of atoms, the world behaves very differently from the classical mechanics that had been used for centuries to describe how objects exist and move. Atoms and subatomic particles, previously envisaged as tiny grains of matter, seemed sometimes to show behaviour associated instead with waves – which are not concentrated at a single point in space but are spread throughout it (think of sound waves, for example).

What’s more, the properties of such quantum objects did not seem to be confined to single, fixed values, in the way that a tossed coin has to be either heads or tails or a glove has to be left- or right-handed. It was as if they could be a mixture of both states at once, called a superposition.

That “as if” is crucial. No one can say what the “true nature” of quantum objects is – for the simple yet perplexing reason that quantum theory doesn’t tell us. It just provides a means of predicting what a measurement we make will reveal. A quantum coin can be placed in a superposition of heads and tails: once it is tossed, either state remains a possible outcome when we look. It’s not that we don’t know which it is until we look; rather, the outcome isn’t fixed, even with the quantum coin lying flat on the table covered by our palm, until we look. Is the coin “both heads and tails” before we look? You could say it that way, but quantum mechanics declines to say anything about it.
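For readers who like to see the bookkeeping, here is a toy illustration in Python (my own sketch, not anything from IBM’s labs): the “coin” is described by two amplitudes, and only the act of measurement turns them into a definite heads or tails.

```python
import random

# Toy model of a quantum "coin": before measurement it is described by
# two amplitudes; the squared amplitudes give the odds of each outcome,
# and only the measurement itself fixes heads or tails.
amp_heads = amp_tails = 2 ** -0.5      # equal superposition
p_heads = abs(amp_heads) ** 2          # probability of heads = 0.5
outcome = "heads" if random.random() < p_heads else "tails"
print(f"P(heads) = {p_heads:.2f}, this measurement gave: {outcome}")
```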

Thanks to superposition, qubits can, in effect, encode one and zero at the same time. As a result, quantum computers can represent many more possible states of binary ones and zeros. How many more? A classical bit can represent two states: zero and one. Add a bit (an extra transistor, say) to your computer’s processor and you can encode one more piece of binary information. Yet if a group of qubits are placed in a joint superposition, called an entangled state, each additional qubit doubles the encoding capacity. By the time you get to 300 qubits – as opposed to the billions of classical bits in the dense ranks of transistors in your laptop’s microprocessors – you have 2^300 options. That’s more than the number of atoms in the known universe.
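A rough back-of-the-envelope calculation (again a sketch of my own, assuming 16 bytes to store each number) shows how quickly that doubling outruns any conceivable classical memory:

```python
# Describing n entangled qubits classically takes 2**n complex
# amplitudes; at 16 bytes each, the storage needed soon dwarfs
# anything a conventional machine could hold.
for n in (10, 50, 300):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9
    print(f"{n} qubits: 2^{n} = {amplitudes:.2e} amplitudes "
          f"(~{gigabytes:.2e} GB classically)")
```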

You can only access this huge range of options, though, if all the qubits are mutually dependent: in a collective or “coherent” state, which, crudely speaking, means that if we do something to one of them (say, flip a one to a zero), all the others “feel” it. Generally, this requires all the qubits to be placed and maintained in an entangled state.

ENIAC, one of the world’s first digital computers, at the University of Pennsylvania

The difficulty of making a quantum computer mostly involves making and sustaining these coherent states of many qubits. Quantum effects such as superposition and entanglement are delicate and easily disrupted. The jangling atomic motions caused by heat can wash them away. So, to be coherently entangled, qubits must be cooled to extremely low temperatures – we’re typically talking less than a degree above absolute zero (-273° C) – and kept well isolated from the laboratory environment: that is, from the very equipment used to manipulate and measure them. That’s partly why the IBM quantum computer I saw is so bulky: much of it consists of cooling equipment and insulation from the lab environment.

Because of the fragility of entanglement, it has so far been possible only to create quantum computers with a handful of qubits. With more than a few dozen, keeping them all entangled stretches current quantum technologies to the limit. Even then, the qubits remain in a coherent state for just a fraction of a second. You have only that long to carry out your entire quantum computation, because once the coherence starts to decay, errors infect and derail the calculation.

There’s a consensus that silicon transistors are the best bits for conventional computers, but there’s no such agreement about what to make qubits from. The prototype devices that exist so far outside academic laboratories – at Google, IBM and the Canadian company D-Wave, for example – use miniaturised electronic circuits based on superconducting devices. Here, the quantum behaviour stems from an exotic effect called superconductivity, in which metals at very low temperatures conduct electricity without any resistance. The advantage of such qubits is that the well-developed microfabrication technologies used for making silicon circuitry can be adapted to make them on silicon chips, and the qubits can be designed to order.

But other researchers are placing their money on encoding data into the quantum energy states of individual atoms or ions, trapped in an orderly array (such as a simple row) using electric or magnetic fields. One advantage of atomic qubits is that they are all identical. Information can be written into, read out from and manipulated within them using laser beams and microwaves. One leading team, headed by Chris Monroe of the University of Maryland, has created a start-up company called IonQ to get the technology ready for the market.

***

Quantum computers have largely been advertised on the promise that they will be vastly faster at crunching through calculations than even the most powerful of today’s supercomputers. This speed-up – immensely attractive to scientists and analysts solving complex equations or handling massive data sets – was made explicit in 1994 when the American mathematician Peter Shor showed in theory that a computer juggling coherent qubits would be able to factor large numbers much more efficiently than classical computers. Reducing numbers to their simplest factors – decomposing 12 to “two times two times three”, for example – is an exercise in elementary arithmetic, yet it becomes extremely hard for large numbers because there’s no shortcut to trying out all the possible factors in turn. Factorising a 300-digit number would take current supercomputers hundreds of thousands of years, working flat out.
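The brute-force search is easy to write down, which is exactly the problem: the toy Python sketch below (my own, purely for illustration) handles 12 instantly, but for a 300-digit number the loop would still be running long after the Sun has died.

```python
# Toy trial-division factoriser. Fine for small numbers such as 12,
# hopeless for a 300-digit number: the work grows with the square
# root of the number being factorised.
def factorise(n):
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(factorise(12))  # [2, 2, 3] – "two times two times three"
```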

For this reason, a lot of data encryption – such as when your credit card details are sent to verify an online purchase – uses codes based on the factors of large numbers, which no existing computer can crack in any feasible amount of time. Yet Shor showed that a quantum factorisation algorithm could find factors much more efficiently than a classical one can.

How it does so is hard to say. The usual explanation is that, because of all those entangled qubits, a quantum computer can carry out many calculations in parallel that a classical computer would do only one after the other. The Oxford physicist David Deutsch, who pioneered the theory of quantum computing in the 1980s, argues that the quantum computer must be regarded as lots of classical computers working at the same time in parallel universes – a picture derived from Deutsch’s belief in the controversial “many-worlds” interpretation of quantum mechanics, which holds that every possible measurement outcome on a superposition is realised in separate universes. Deutsch’s view isn’t the common one, but even the milder notion of parallel calculations doesn’t satisfy many researchers. It’s closer to the truth to say that entanglement lets qubits share information efficiently, so that quantum logic operations somehow “count for more” than classical ones.

Theorists tend then to speak of quantum computation as drawing on some “resource” that is not available to classical machines. The exact nature of that resource is a little vague and probably somewhat different for different types of quantum computation. As Daniel Gottesman of the Perimeter Institute in Waterloo, Canada, told me, “If you have ‘enough’ quantum mechanics available, in some sense, then you have a speed-up, and if not, you don’t.”

Regardless of exactly how quantum speed-up works, it is generally agreed to be real – and worth seeking. All the same (although rarely acknowledged outside the field), there’s no reason to believe that it’s available for all computations. Aside from the difficulty of building these devices, one of the biggest challenges for the field is to figure out what to do with them: to understand how and when a quantum computation will better a classical one. As well as factorisation, quantum computation should be able to speed up database searches – and there’s no question how useful that would be, for example in combing through the masses of data generated in biomedical research on genomes. But there are currently precious few other concrete applications.

One of the big problems is dealing with errors. Given the difficulty of keeping qubits coherent and stable, these seem inevitable: qubits are sure to flip accidentally now and again, such as a one changing to a zero or getting randomised. Dealing with errors in classical computers is straightforward: you just keep several copies of the same data, so that faulty bits show up as the odd one out. But this approach won’t work for quantum computing, because it’s a fundamental and deep property of quantum mechanics that making copies of unknown quantum states (such as the states of qubits over the course of a computation) is impossible. Developing methods for handling quantum errors has kept an army of researchers busy over the past two decades. It can be done, but a single error-resistant qubit will need to be made from many individual physical qubits, placing even more demands on the engineering.
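The classical trick is easy to sketch (another toy example of my own): keep three copies of each bit and let a majority vote outvote the odd one out. Quantum error correction has to achieve the same protection without ever copying the fragile quantum state, which is what makes it so much harder.

```python
# Classical repetition code: store each bit three times and take a
# majority vote, so a single flipped copy is outvoted and corrected.
def encode(bit):
    return [bit, bit, bit]

def decode(copies):
    return 1 if sum(copies) >= 2 else 0

stored = encode(1)     # [1, 1, 1]
stored[1] = 0          # one copy flips by accident
print(decode(stored))  # 1 – the error is outvoted
```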

And while it is tempting to imagine that more powerful quantum computers mean more qubits, the equation is more complex. What matters, too, is how many logical steps in a computation you can carry out on a system of qubits while they remain coherent and relatively error-free – what quantum scientists call the depth of the calculation. If each step is too slow, having more qubits won’t help you. “It’s not the number of qubits that matters, but their quality,” John Martinis, who heads Google’s quantum-computing initiative in their Santa Barbara research centre, told me.

***

One of the likely first big applications of quantum computing isn’t going to set the world of personal computing alight, but it could transform an important area of basic science. Computers operating with quantum rules were first proposed in 1982 by the American physicist Richard Feynman. He wasn’t concerned with speeding up computers, but with improving scientists’ ability to predict how atoms, molecules and materials behave using computer simulations. Atoms obey quantum rules, but classical computers can only approximate these in cumbersome ways: predicting the properties of a large drug molecule accurately, for example, requires a state-of-the-art supercomputer.

Quantum computers could hugely reduce the time and cost of these calculations. In September, researchers at IBM used the company’s prototype quantum computer to simulate a small molecule called beryllium dihydride. A classical computer could, it’s true, do that job without much trouble – but the quantum computer doing it had just six qubits. With 50 or so qubits, these devices would already be able to do things beyond the means of classical computers.

That feat – performing calculations impossible by classical means – is a holy grail of quantum computing: a demonstration of “quantum supremacy”. If you asked experts a few years ago when they expected this to happen, they’d have been likely to say in one or two decades. Earlier this year, some experts I polled had revised their forecast to within two to five years. But Martinis’s team at Google recently announced that they hope to achieve quantum supremacy by the end of this year.

That’s the kind of pace the research has suddenly gathered. Not so long ago, the question “When will I have a quantum computer?” had to be answered with: “Don’t hold your breath.” But you have one now. And I do mean you. Anyone in the world can register online for IBM’s Q Experience, a cloud-based service that gives access to a five-qubit quantum computer housed at the company’s research lab in Yorktown Heights, New York.

During my Zurich visit, two young researchers, Daniel Egger and Marc Ganzhorn, walked me through the software. You configure a circuit from just five qubits – as easily as arranging notes on a musical stave – so that it embodies the algorithm (sequence of logical steps) that carries out your quantum computation. Then you place your job in a queue, and in due course the answers arrive by email.
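For those who prefer typing to dragging icons, IBM also publishes an open-source Python toolkit, Qiskit, for building the same kind of circuits. The snippet below is a minimal sketch of a two-qubit entangling circuit; the interface has changed between versions, so treat it as indicative rather than gospel.

```python
# Minimal sketch using IBM's open-source Qiskit toolkit (details vary
# by version): put one qubit into superposition, entangle it with a
# second, then measure both.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)    # two qubits, two classical bits
qc.h(0)                      # Hadamard gate: superposition on qubit 0
qc.cx(0, 1)                  # CNOT gate: entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])   # read both qubits out
print(qc.draw())             # text diagram of the circuit
```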

Of course, you need to understand something about quantum computing to use this resource, just as you needed to know about classical computing to program a BBC Micro to play Pong. But kids can learn this, just as we did back then. In effect, a child can now conduct quantum experiments that several decades ago were the preserve of the most hi-tech labs in the world, and that were for Schrödinger’s generation nothing but hypothetical thought experiments. So far, more than 60,000 users have registered to conduct half a million quantum-computing experiments. Some are researchers, some are students or interested tinkerers. IBM Q has been used in classrooms – why just teach quantum theory when you can learn by quantum experiments? The British composer Alexis Kirke has used the device to create “quantum music”, including “infinite chords” of superposed notes.

IBM has just announced that it is making a 20-qubit device available to its corporate clients, too – a leap perhaps comparable to going from a BBC Micro to an Intel-powered laptop. Opening up these resources to the world is an act of enlightened self-interest. If the technology is going to flourish, says the company’s communications manager, Chris Sciacca, it needs to nurture an ecosystem: a community of people familiar with the concepts and methods, who between them will develop the algorithms, languages and ultimately plug-in apps other users will depend on.

It’s a view shared across the nascent quantum computing industry. From Zurich, I travelled to Heidelberg to chair a panel on the topic that included Martinis. It was only with the advent of personal computing in the 1970s, he told the audience, that information technology began to take off, as a generation of computer geeks started to create and share tools – eventually including the Linux operating system, the Google search engine, Facebook and all the rest.

There’s no shortage of people eager to get involved. Sciacca says that IBM’s researchers can barely get their work done, so inundated are they with requests from businesses. Every company director wants to know, “When can I get one?” While online access is all very well, for reasons of security and prestige you can be sure that companies will want their own quantum machine, just as they have their own web and email servers. Right now, these are hardly off-the-shelf devices and the cloud-based model seems the logical way forward. But no one is likely to forget the apocryphal remark attributed to IBM’s long-time president Thomas J Watson in the 1940s that about five computers should be enough for the entire world.

***

The commercial potential is immense, and already giants such as Google and IBM face stiff competition from young upstarts, including Rigetti Computing in California, founded in 2013 by the former IBM scientist Chad Rigetti. The company has developed the world’s first commercial fabrication facility for quantum circuitry, and it’s in a bullish mood.

“We expect commercially valuable uses of quantum computing within the next five years,” Madhav Thattai, Rigetti’s chief strategy officer, told me, adding: “In ten to 15 years, every major organisation will use this technology.” What the chances are of us all having portable personal devices that rely on liquid-helium cooling is another matter.

This commercialisation will be accelerated by piggybacking on the infrastructure of conventional computing, such as access to cloud- and web-based resources. Quantum computers probably won’t be needed – and perhaps won’t even be best suited – for every part of a computational problem; instead they will work in partnership with classical computers, brought in to do only what they do well.

Martinis and his colleagues at Google are testing a 22-qubit quantum computer and have a 49-qubit model in development. IBM has recently announced that it has tested a 50-qubit device, too. Achieving even these numbers is a daunting task – but Martinis has already turned the Google lab into a world leader in astonishingly short order. Given the precipitous recent progress after decades of talk and theory, the challenge is to balance the extraordinary, transformative potential of the field against the risk of hype. It’s as if we’re in the 1950s, trying to predict the future of computing: no one had any idea where it would lead.

In Heidelberg, I asked Martinis how he feels about it all. Is he pinching himself and thinking, “My God, we have real quantum computers now, and we’re already simulating molecules on them”? Or is he fretting about how on Earth he and his peers will surmount the technical challenges of getting beyond today’s “toy” devices to make the promised many-qubit machines that will change the world?

I should have seen the answer coming. A flicker of a smile played across his face as he replied, “I’m in a superposition of both.” 

