The Problem with Quantum Computers (and What Google’s Doing About It)
Picture yourself in a supermarket. On your shopping trip, you walk along aisles to search for items, taking the ones you desire off shelves, and placing them in your basket. Once you have everything you need, you head for the checkout, where, ordinarily, somebody scans all your items’ barcodes. The supermarket, after all, has no idea what you took as you went round, but needs to know so they can charge you. Of course, you wouldn’t mind walking out without paying; you love free food.
Unfortunately, this supermarket is experiencing a technical problem. Every time a barcode is scanned, the item in question spontaneously combusts, leaving you annoyed and the cashier baffled. This is clearly a problem — the supermarket has no way to preserve your shopping trip while also getting a receipt for it. Without manually rummaging through your bags, they’re stuck for a solution.
This is the problem of error correction in quantum computing. Physicists and computer scientists cannot directly read information out of a quantum computation without disturbing the data, just as the supermarket was unable to scan barcodes without destroying the means of charging you. Unlike the supermarket, however, quantum computing has researchers working to make measurement possible without wrecking the computation.
Quantum computing leverages two concepts from physics: superposition and entanglement. In a classical computer (the kind you're reading this on), information is represented as strings of definite 1s and 0s. In a quantum computer, the properties of quantum particles (like photons or electrons) that represent information exist in a state of superposition; that is, the state of being a 0, a 1, or anything in between, simultaneously. These quantum bits (qubits) are so fragile that any observation or influence from the environment can cause their state to collapse into a 0 or a 1, destroying the data they hold.
Scientists at Google recently published a paper in which the second concept, entanglement, is used to tackle this problem. Entanglement, which Einstein once called "spooky action at a distance", is a relationship between two quantum particles in which whatever happens to the state of one also happens to its entangled twin, no matter where in the universe they are.
With their quantum computer, Google physicists divided qubits into two types: ones to hold data (data qubits) and ones entangled with data qubits to signal errors (measure qubits). These were then grouped into 'logical qubits', which encode a single superposition, so that an error flagged by a measure qubit need not compromise the logical qubit's information. Two different methods for organising qubits, called codes, were used: one where data and measure qubits alternated in a one-dimensional, linear chain; and another where they were spread across a two-dimensional lattice to pick up more types of error.
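The linear arrangement has a simple classical analogue that shows the trick: the data bits are never read directly, but parity checks between neighbouring bits (playing the role of measure qubits) reveal where a flip happened. This is only an illustrative sketch — a real quantum code protects superpositions, which no classical example can capture — and the bit values here are made up.

```python
# Toy classical analogue of the linear code: parity checks between
# neighbouring data bits flag errors without reading the data itself.

def syndromes(data_bits):
    """Parity of each adjacent pair: a 1 marks a boundary where a flip sits."""
    return [data_bits[i] ^ data_bits[i + 1] for i in range(len(data_bits) - 1)]

# A logical 0 encoded redundantly across five data bits.
encoded = [0, 0, 0, 0, 0]
encoded[2] ^= 1            # the environment flips the middle bit
print(syndromes(encoded))  # the two checks around bit 2 fire: [0, 1, 1, 0]
```

Crucially, the checks locate the flipped bit (the two 1s bracket it), so it can be corrected, while each individual data value stays unread.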
For the linear code, they discovered that logical errors decreased exponentially as more qubits were added along the one-dimensional chain, the first finding of its kind in the literature. From 5 to 21 qubits, logical errors fell by a factor of 100, occurring with a probability of 6.7 × 10⁻⁵ at 21 qubits. However, there was an issue with this code's layout: qubits that were distant along the chain but physically close together as it snaked around the two-dimensional surface sometimes became entangled, damaging results.
For the lattice code, errors cropped up more frequently, with 27% of trials discarded due to error detection. That said, this code held more promise for future research: its ability to detect more types of errors makes it better suited to scaling up to more complex quantum computers.
Directions for future research are clear: add more qubits, do more calculations, and correct more errors. As with everything in quantum computing, however, this is easier said than done. The more qubits present, the more the environment may disturb computations. The more calculations done, the greater the chance an observation will affect a qubit's state. The more errors corrected, the longer a calculation takes. Still, with these landmark findings, Google have taken the field one step closer to more practical quantum computers (or more reliable supermarket technology).