The shift in quantum hardware roadmaps is good news, as it could pave the way for a given hardware platform to implement ten times as many logical qubits as previously thought.
Quantum computing has moved closer to reality thanks to recent breakthroughs in quantum error correction, which have reduced the resource overhead required to maintain qubit integrity. This means that the timeline for practical, fault-tolerant quantum computing (FTQC) has shortened, supporting the Kvantify strategy of building quantum-ready solutions today.
Quantum error correction is the dirty secret of quantum computing
Unlike classical systems, where error correction is rarely needed outside storage and communication, quantum hardware is inherently imperfect, and fault-tolerant quantum computers will require error correction in everything they do.
The fact that it is possible to simulate ideal (or "logical") qubits and gate operations on top of imperfect physical hardware, as long as the physical error rate is below a code-dependent threshold, is perhaps counterintuitive. It is also one of the cornerstones of the theoretical foundation for quantum computing, as we discuss in our blog post "Errare Quantum Est".
The price for this emergent perfection is overhead: in system size, since each logical qubit requires many physical qubits, and in operating speed, since each operation on logical qubits requires many gate operations between physical qubits.
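To make this trade-off concrete, a commonly quoted heuristic (not specific to any one code or hardware platform) relates the logical error rate to the physical error rate and the code distance $d$:

\[
p_{\text{logical}} \approx A \left(\frac{p_{\text{physical}}}{p_{\text{th}}}\right)^{(d+1)/2},
\]

where $p_{\text{th}}$ is the code-dependent threshold and $A$ is a constant of order one. Increasing $d$ suppresses logical errors exponentially, but the number of physical qubits per logical qubit also grows with $d$, which is exactly the overhead described above.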
The size of the error correction overhead varies with the type of code and the capabilities and performance of the underlying physical system. Early theoretical work focused on concatenated codes, which required the physical qubits to be highly connected, a feature that can be hard to achieve in practice.
As an alternative, much commercial work on quantum computing hardware has focused on surface codes: although these have large overheads of up to a factor of a thousand (meaning that up to 1,000 physical qubits are needed for each logical qubit), they only require the physical qubits to be connected to their nearest neighbours in a rectangular grid, which is feasible for most technologies.
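As a rough sanity check on that factor-of-a-thousand figure, here is a minimal back-of-the-envelope sketch. It assumes the heuristic scaling above, an illustrative physical error rate of 10⁻³, a threshold of 10⁻², a target logical error rate of 10⁻¹², and the rotated-surface-code estimate of roughly 2d² − 1 physical qubits per logical qubit; all of these numbers are assumptions chosen for illustration, not properties of any particular device.

```python
# Back-of-the-envelope surface-code overhead estimate, using the heuristic
# scaling quoted above and ~2*d^2 - 1 physical qubits per logical qubit for a
# rotated surface code. All numbers are illustrative assumptions.

A = 0.1             # order-one prefactor (assumption)
p_physical = 1e-3   # physical error rate per operation (assumption)
p_threshold = 1e-2  # code-dependent threshold (assumption)
p_target = 1e-12    # target logical error rate for a deep circuit (assumption)

# Find the smallest odd code distance d that reaches the target.
d = 3
while A * (p_physical / p_threshold) ** ((d + 1) / 2) > p_target:
    d += 2

physical_per_logical = 2 * d**2 - 1
print(f"code distance: d = {d}")                                       # d = 21
print(f"physical qubits per logical qubit: ~{physical_per_logical}")   # ~881
```

With these assumptions the loop lands on a distance of 21, i.e. just under a thousand physical qubits per logical qubit, in line with the overhead quoted above.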
Over the last year or so, a new family of codes has gained attention. These quantum low-density parity-check (qLDPC) codes promise to reduce the overhead by a factor of ten or more compared to surface codes, at the price of requiring a small amount of long-range connectivity.
If current hardware roadmaps can be tweaked to accommodate the added connectivity requirements, which looks to be possible, qLDPC codes would allow a given hardware platform to implement ten times as many logical qubits as previously thought.
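To see where the factor of ten comes from, here is a small illustrative comparison. It uses a [[144, 12, 12]] bivariate bicycle code, an often-cited qLDPC example from the recent literature that encodes 12 logical qubits at distance 12 in 144 data qubits plus 144 check qubits, set against 12 separate distance-12 surface codes; the surface-code count again uses the rough 2d² − 1 estimate, so treat both numbers as illustrative rather than as hardware figures.

```python
# Illustrative comparison: 12 logical qubits at distance 12, encoded either in
# 12 separate rotated surface codes (~2*d^2 - 1 physical qubits each) or in one
# [[144, 12, 12]] bivariate bicycle qLDPC code (144 data + 144 check qubits).

logical_qubits = 12
distance = 12

surface_code_total = logical_qubits * (2 * distance**2 - 1)  # ~3,444 physical qubits
qldpc_total = 144 + 144                                      # 288 physical qubits

print(f"surface codes:             ~{surface_code_total} physical qubits")
print(f"bivariate bicycle (qLDPC): ~{qldpc_total} physical qubits")
print(f"reduction factor:          ~{surface_code_total / qldpc_total:.0f}x")
```

The resulting ratio is roughly twelve, consistent with the "factor of ten or more" quoted above.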
While qLDPC codes are particularly promising for quantum memory, implementing gate operations on the stored data remains a challenge. This may require additional techniques such as magic state distillation and gate teleportation, which introduce their own overhead. Despite these hurdles, the advantages of qLDPC codes make them a crucial step forward.
In conclusion, qLDPC codes have the potential to reduce the quantum error correction overhead by up to a factor of ten for most technologies, pulling the roadmap for fault-tolerant quantum computing forward by several years.
This is good news for us at Kvantify: Part of our mission is to identify real-world value in quantum applications, and the sooner we see utility-scale quantum computing, the sooner we can demonstrate this value.