In a recent article entitled “Myths around quantum computation before full fault tolerance: What no-go theorems rule out and what they don’t” (https://arxiv.org/abs/2501.05694), a broad group of experts from the quantum-computing and quantum-algorithms community, led by Algorithmiq, provides an interesting and enlightening perspective on the current role and future prospects of a number of techniques commonly regarded as ‘near term’. The article has spurred good discussions here at Kvantify, and we have decided to sum those up in this blog post.
A pervasive theme of the article is noise, errors, and how to handle them or live with them. And for good reason – noise is the Achilles heel of present-day quantum computers, as hard-coded into the very name NISQ (Noisy Intermediate-Scale Quantum). Out of the six myths considered in the article, four are directly related to noise and errors. While most people in the field by now agree that large-scale practical quantum advantage is reserved for the era of fault-tolerant quantum computing, this should not mislead us as algorithm and software developers into sitting on our hands and passively watching as hardware vendors tick off milestones and eventually put thousands of logical qubits in our hands.
As the authors put it, “deconstructing unrealistic expectations should not lead to a dismissal of real opportunities”. Well said!
Having invested significant effort into developing and deploying the algorithm FAST-VQE, which overcomes some of the main shortcomings of so-called vanilla VQE (Variational Quantum Eigensolver), it should come as no surprise that we generally agree with the authors that variational algorithms are worthwhile. And this goes beyond their potential applicability in the fault-tolerant era. Variational quantum algorithms on today's noisy hardware can perform concrete and practically relevant tasks and thereby let us explore the interface between quantum and classical calculations. Since quantum algorithms will never run stand-alone, we deem this effort well spent in driving quantum algorithms towards value-creating applications.
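To make the hybrid structure of such algorithms concrete, here is a minimal, purely illustrative sketch of the variational loop: a classical optimizer tunes the parameters of a quantum state so as to minimize the energy of a Hamiltonian. The toy one-qubit Hamiltonian, the Ry ansatz, and the grid search are all stand-ins chosen for clarity – this is not FAST-VQE, and a real chemistry workload would use a proper optimizer and a hardware backend.

```python
import numpy as np

# Pauli matrices for building a toy one-qubit Hamiltonian.
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

# Hypothetical example Hamiltonian; in quantum chemistry, the
# coefficients would come from classically computed integrals.
H = 0.5 * Z + 0.3 * X

def ansatz(theta):
    # Ry(theta) applied to |0>: a one-parameter variational state.
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(theta):
    # Expectation value <psi(theta)|H|psi(theta)>, which on real
    # hardware would be estimated from measurement samples.
    psi = ansatz(theta)
    return float(np.real(psi.conj() @ H @ psi))

# Classical outer loop: a simple grid search stands in for a proper
# optimizer such as SPSA or COBYLA.
thetas = np.linspace(0, 2 * np.pi, 1001)
best = min(energy(t) for t in thetas)

# Exact ground-state energy, for comparison with the variational result.
ground = float(np.min(np.linalg.eigvalsh(H)))
print(best, ground)
```

For this toy Hamiltonian the one-parameter ansatz can reach the exact ground state, so the variational minimum coincides with the lowest eigenvalue; for realistic molecular Hamiltonians, the quality of the ansatz is exactly where algorithmic work such as FAST-VQE comes in.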
Even within the subfield of quantum algorithms for chemistry and drug discovery, there are many ways in which quantum and classical algorithms may be interfaced. First, the classical-computing framework provides the data required to set up the problem on the quantum computer – more specifically, the integrals entering the Hamiltonian are computed classically. Both for Quantum Phase Estimation on fault-tolerant devices and for variational algorithms, classical integral evaluation provides the foundation. Second, the quantum algorithm is prepared on a classical machine by composing elementary quantum operations into a quantum circuit. Since this can be done in many ways, the resulting circuit may vary considerably in complexity. In the NISQ era, deep circuits accumulate large errors, and useful quantum error-mitigation techniques have been developed to counter them. It would, however, be even better to reduce the circuit size as much as possible so that the errors do not occur in the first place. This is something we have put a lot of work into, and as reported in a recent paper (https://arxiv.org/abs/2408.04349) we have demonstrated optimal CNOT synthesis, achieving reductions of up to 56% in CNOT count and 46% in circuit depth against standard T-gate-optimized benchmarks. Circuit optimization is essential for cutting through the noise of current hardware, and it will remain an essential tool for pushing practical applications in the fault-tolerant era. And there is no overhead to pay.
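The two metrics quoted above, CNOT count and circuit depth, are easy to define on any gate-list representation of a circuit. The following sketch uses a deliberately minimal, hypothetical representation (each gate as a name plus the qubits it acts on); real toolchains use far richer intermediate representations, and the optimal-synthesis algorithms from the paper are not reproduced here.

```python
# Hypothetical circuit: a list of (gate_name, qubits) tuples.
circuit = [
    ("h", (0,)),
    ("cnot", (0, 1)),
    ("t", (1,)),
    ("cnot", (1, 2)),
    ("cnot", (0, 1)),
]

def cnot_count(circ):
    # Count the two-qubit entangling gates, the dominant noise source
    # on most current hardware.
    return sum(1 for name, _ in circ if name == "cnot")

def depth(circ):
    # Depth = number of layers when gates acting on disjoint qubits
    # run in parallel. Track, per qubit, the layer at which it is
    # next free; each gate lands one layer after its busiest qubit.
    free = {}
    d = 0
    for _, qubits in circ:
        layer = 1 + max((free.get(q, 0) for q in qubits), default=0)
        for q in qubits:
            free[q] = layer
        d = max(d, layer)
    return d

print(cnot_count(circuit), depth(circuit))  # → 3 5
```

Shrinking either number for a fixed unitary is precisely what circuit optimization is about: the fewer entangling gates and layers, the less time the fragile quantum state spends exposed to noise.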
We have recently demonstrated how we combine classical and quantum resources to obtain useful results on current-day quantum devices. To do so, we have worked heavily on the two paradigms outlined above, i.e. the classical-quantum embedding algorithm and quantum-circuit optimization techniques, respectively. This combination has allowed us to simulate the rate-determining step of the carbonic anhydrase enzymatic reaction using classical and quantum resources with chemical accuracy along the entire reaction coordinate. This work was described in one of our previous posts (https://www.kvantify.com/inspiration/this-worlds-first-is-a-huge-thing) and in full detail in the scientific article “Calculating the energy profile of an enzymatic reaction on a quantum computer” (https://arxiv.org/pdf/2408.11091).
It is commendable that the authors, towards the end of their myth-busting quest, maintain a cautiously optimistic standpoint on future quantum speedups for end-to-end applications. It has been shown several times that, once detailed resource estimates are carried out, there is much more to demonstrating a practical quantum advantage than the bare asymptotic scaling advantage of a given quantum algorithm. However, while it is certainly correct that, as of now, we do not have a proven exponential speedup for a practically relevant and commercially valuable end-to-end application, there is ample room for informed optimism. We thus share the authors’ point of view that “it is reasonable to expect that quantum computers will eventually deliver meaningful speedups in practical problems”. We believe this will happen first in chemistry and life-science applications, and our mission is to pioneer the necessary developments.