One of the biggest stumbling blocks for quantum computers is their propensity for errors and the massive computational overhead required to clean up those mistakes. IBM has now made a breakthrough by dramatically reducing the number of qubits error correction requires.
All computers are prone to errors, and even the computer chip in your laptop runs code designed to fix things when a bit flips accidentally. But fragile quantum states are far more vulnerable to things like environmental noise, which means correcting errors in quantum processors will require considerable resources.
Most estimates predict that creating just a single fault-tolerant qubit, or logical qubit, that can carry out useful operations will require thousands of physical qubits dedicated to error correction. Given that today’s biggest processors have just hundreds of qubits, this suggests we’re still a long way from building practical quantum computers that can solve real problems.
But now researchers at IBM say they’ve discovered a new approach that slashes the number of qubits required for error correction by a factor of 10. While the approach currently only works on quantum memory rather than computation, the technique could open the door to efficient new approaches to creating fault-tolerant devices.
“Practical error correction is far from a solved problem,” the researchers write in a blog post. “However, these new codes and other advances across the field are increasing our confidence that fault tolerant quantum computing isn’t just possible, but is possible without having to build an unreasonably large quantum computer.”
The leading approach to error correction today is known as the surface code, which involves arranging qubits in a specially configured 2D lattice and using some to encode data and others to make measurements to see if an error has occurred. The approach is effective, but it requires a large number of physical qubits to pull off—as many as 20 million for some key problems of interest, according to IBM.
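For a rough sense of where those large numbers come from, here is a back-of-envelope sketch in Python. It assumes the commonly cited rule of thumb that a (rotated) surface code needs about 2d² − 1 physical qubits per logical qubit at code distance d; the distances shown are illustrative, not figures from IBM's paper.

```python
# Rough surface-code overhead: the rotated surface code uses d**2 data
# qubits plus d**2 - 1 measurement qubits per logical qubit, i.e. about
# 2*d**2 - 1 physical qubits at code distance d (a common rule of thumb).

def surface_code_physical_qubits(distance: int) -> int:
    """Approximate physical qubits for one logical qubit at a given distance."""
    return 2 * distance**2 - 1

for d in (11, 17, 27):
    per_logical = surface_code_physical_qubits(d)
    print(f"distance {d}: ~{per_logical} physical qubits per logical qubit")
```

Higher distances suppress errors more strongly, but the quadratic growth in qubit count is why surface-code estimates for useful machines run into the millions.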
The new technique, outlined in a preprint on arXiv, comes from the same family of error-correction approaches as the surface code. But while each qubit in the surface code is connected to four others, the new technique connects them to six others, which makes it possible to encode more information into the same number of physical qubits.
As a result, the researchers say they can reduce the number of qubits required by an order of magnitude. Creating 12 logical qubits using their approach would require only 288 physical qubits, compared to more than 4,000 when using the surface code.
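The savings can be checked directly from the figures quoted above, a minimal sketch:

```python
# Qubit counts quoted in the article: 12 logical qubits need 288 physical
# qubits with IBM's new code, versus more than 4,000 with the surface code.

logical_qubits = 12
new_code_physical = 288
surface_code_physical = 4000  # lower bound quoted in the article

overhead_per_logical = new_code_physical / logical_qubits
savings_factor = surface_code_physical / new_code_physical

print(overhead_per_logical)  # 24.0 physical qubits per logical qubit
print(savings_factor)        # ~13.9x fewer qubits than the surface code
```

That works out to 24 physical qubits per logical qubit, and a reduction of at least an order of magnitude, consistent with the researchers' claim.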
There are some significant caveats, though. For a start, it’s currently impossible to achieve the kind of six-way connectivity the team envisages. While the surface code operates on a single plane and can therefore be easily implemented on the kind of flat chip already found in quantum processors, the new approach requires connections to distant qubits that aren’t located on the same surface.
The researchers say this isn’t an insurmountable barrier, and IBM is already developing the kind of long-range couplers required to make these kinds of connections. The technologies needed are certainly plausible, Jérémie Guillaud at French quantum computing startup Alice & Bob told New Scientist, and could be here in just a matter of years.
A bigger limitation, though, is that so far the approach supports only a small number of logical operations. This means that while it allows reading from and writing to a quantum memory in a fault-tolerant way, it wouldn’t support most quantum computations.
But the IBM researchers say the techniques they’ve unveiled are just a stepping stone that points toward a rich new vein of even better error-correction approaches. If they’re right and scientists are able to find more efficient alternatives to the surface code, it could significantly accelerate the advent of practical quantum computing.
Image Credit: IBM