There was a time when there were no quantum computers. Do you remember? I remember. But the world has changed.
Today we are well into the NISQ era: we live in a world of quantum computers with many noisy qubits built from ions, photons, atoms, and Josephson junctions. The best available devices have matured to the point where they are extremely difficult to simulate directly with classical methods, and scientists around the world have leveraged NISQ devices to perform beautiful and important experiments.
There have also been advances in quantum algorithms, where many innovations occurred by embracing the restrictions of NISQ devices. For example, I think variational algorithms are popular in part because they answer the question “what can you do with a few noisy qubits?” Similarly, error mitigation came as a surprise to me; it simply had not crossed my mind that noise could be addressed with anything other than full error correction.
But despite all these advances, everything indicates that noise is too problematic and that classical methods are too powerful for NISQ devices to be transformative for industrial applications. So what’s next?
The ISQ era
Rather than scaling to more noisy qubits, most research groups are now working to develop the first generation of fault-tolerant quantum computers. A key milestone is the demonstration of a fully programmable logical qubit with extremely low error rates. That achievement would enable the emergence of devices with a few error-corrected logical qubits. I like to call this the ISQ era.
ISQ is a regime that sits between NISQ and full fault tolerance: ISQ devices are still “intermediate-scale”, but noise is less of a problem because we have unlocked the benefits of error correction, allowing us to implement quantum algorithms with much longer circuit depths. In all likelihood, the next chapter for quantum computing will be to move from NISQ to ISQ.
(The terminology used in the literature is early fault-tolerant, but I don’t think that has the same ring to it.)
This post is an invitation to build tools that are tailored for the ISQ era. Pioneers have already started this effort, but this is only the beginning. I believe there is an important opportunity to build the theoretical methods that will define the future of quantum computing.
What is ISQ?
When developing algorithms for fault-tolerant quantum computers, we basically assume we can do anything we want and design quantum algorithms accordingly. For example, at Xanadu we have developed very sophisticated quantum algorithms for simulating battery materials that require thousands of qubits and trillions of gates. Such efforts are crucial to understand the ultimate capabilities of quantum computers.
On the other hand, when exploring algorithms for NISQ devices, we usually assume access to a limited number of qubits and have to carefully restrict circuit depth since operations are noisy. In contrast, the defining features of the ISQ era are:
- Limited number of qubits — we still cannot afford methods that require too many qubits.
- Longer circuit depths — orders-of-magnitude larger than for NISQ devices, but still ultimately limited.
As expressed by the brilliant Yu Tong during our conversations, the motivation for embracing the ISQ era is that “rather than waiting for error correction to be extremely good, we want to optimize quantum algorithms so they can run with lower error-correction requirements.” In practice, this means performing a careful balancing act to achieve objectives such as reducing the number of non-Clifford operations, while respecting the constraints of ISQ devices.
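As a toy illustration of this kind of resource accounting, the sketch below counts gate types in a small placeholder circuit with PennyLane (the exact structure of the qml.specs output may differ between PennyLane versions). In an ISQ setting, the count to watch is that of non-Clifford gates such as T, since they dominate the cost of error correction.

```python
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit():
    # Placeholder circuit: Clifford gates (H, S, CNOT) plus two non-Clifford T gates
    qml.Hadamard(wires=0)
    qml.T(wires=0)
    qml.CNOT(wires=[0, 1])
    qml.S(wires=1)
    qml.T(wires=1)
    return qml.expval(qml.PauliZ(0))

# Report resource counts; the T count is the one an ISQ-era
# compilation pipeline would try hardest to reduce.
resources = qml.specs(circuit)()["resources"]
print(resources.gate_types)  # e.g. {'Hadamard': 1, 'T': 2, 'CNOT': 1, 'S': 1}
```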
The algorithms suitable for the ISQ era are different from those for NISQ (since we have access to more powerful devices) and different from those for full fault tolerance (since they still face restrictions on qubit number and circuit depth). As a researcher, I recognize this as fertile ground for innovation.
My favourite ISQ algorithm
Rather than reviewing existing papers, I want to focus on a high-level description of my favourite ISQ algorithm. Hopefully this will give you a sense of what makes ISQ algorithms unique. I will focus on the paper Heisenberg-limited ground-state energy estimation for early fault-tolerant quantum computers.
The goal is to estimate eigenvalues of an input Hamiltonian, especially the ground-state energy. This is the type of problem we want to solve in quantum chemistry. The assumptions of the algorithm (big assumptions!) are that we can implement the time evolution operator and prepare a good approximation of the true ground state. Let me summarize how it works.
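Before getting there, here is a minimal PennyLane sketch of the two assumed primitives. The two-qubit Hamiltonian and the hard-coded initial state are hypothetical placeholders, and the Trotterized evolution is just one possible way of realizing the assumed time-evolution operator.

```python
import numpy as np
import pennylane as qml

# Hypothetical two-qubit toy Hamiltonian
H = qml.Hamiltonian([1.0, 0.5], [qml.PauliZ(0) @ qml.PauliZ(1), qml.PauliX(0)])

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def evolved_state(t):
    # Assumption 1: prepare a good approximation of the ground state
    # (here just a placeholder computational basis state)
    qml.BasisState(np.array([1, 1]), wires=[0, 1])
    # Assumption 2: implement the time-evolution operator e^{-iHt},
    # realized here with a first-order Trotter product formula
    qml.ApproxTimeEvolution(H, t, n=10)
    return qml.state()

print(evolved_state(0.5))
```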
Given any input state, there is an induced probability distribution p(n) of sampling the n-th eigenvalue when performing a measurement in the eigenbasis of the Hamiltonian. The main insight is to focus on the cumulative distribution function (CDF) induced by the input state. The CDF adds up the probabilities p(n) up to a certain energy, so the function has “jumps” occurring at each eigenvalue. The algorithm is designed to reconstruct the CDF; we can then look at where the jumps occur and identify them as eigenvalues of the Hamiltonian.
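To pin down the notation (mine, but consistent with the setup above): writing |E_n\rangle for the eigenstates of H with energies E_n, the induced distribution and its CDF are

p(n) = |\langle E_n | \psi \rangle|^2, \qquad C(x) = \sum_{n \,:\, E_n \le x} p(n)

The jump at the ground-state energy has size p(0), which is why the algorithm assumes an input state with good overlap with the true ground state.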
Figure: the CDF (top) has jumps occurring at the eigenvalues of the Hamiltonian, and the size of each jump is proportional to the weight of the corresponding eigenstate in the input state. The algorithm reconstructs a noisy version of the CDF (bottom). In this example we can distinguish eigenvalues with large jumps, but smaller ones are hidden by the noise.
The main idea is to express the CDF as a Fourier series, then evaluate the resulting sum using a Monte Carlo approach. For an input state |\psi\rangle, the truncated Fourier expansion of the CDF takes the form

C(x) \approx \sum_{|k| \le d} F_k \, e^{ikx} \, \langle \psi | e^{-iHk} | \psi \rangle,

where F_k denotes the Fourier coefficients and H is the Hamiltonian. The sum is truncated at a cutoff d, which corresponds to a maximum evolution time. This parameter should be chosen according to the constraints of the ISQ device.
We want to express this sum as the expectation value of a random variable so that it can be evaluated by computing a sample mean. We define four quantities:
- \theta_k: the argument of F_k, i.e., F_k = |F_k| e^{i\theta_k}
- F: the sum of absolute values of Fourier coefficients, F= \sum_k |F_k|
- Z_k: an abbreviation for the expectation value of the time-evolution operator for time k, Z_k = \langle \psi | e^{-iHk} | \psi\rangle
- P(k) = |F_k|/F: the probability of sampling k, proportional to the magnitude of its Fourier coefficient
Then it only takes a bit of math to express the CDF as an expectation value. Substituting F_k = |F_k| e^{i\theta_k} and P(k) = |F_k|/F into the Fourier expansion gives

C(x) \approx \sum_{|k| \le d} F_k \, e^{ikx} Z_k = F \sum_{|k| \le d} P(k) \, Z_k \, e^{i(kx + \theta_k)} = F \, \mathbb{E}_{k \sim P}\left[ Z_k \, e^{i(kx + \theta_k)} \right]
The algorithm estimates this through random sampling in a few steps:
- Sample k with probability P(k)
- Use the Hadamard test to obtain single-shot estimates of the real and imaginary parts of Z_k
- Repeat this N times to obtain a series of values (k^{(n)}, Z_{k^{(n)}}), n = 1, 2, \ldots, N
- Compute the estimated CDF as C(x) \approx \frac{F}{N} \, \mathrm{Re} \sum_{n=1}^N Z_{k^{(n)}} e^{i(k^{(n)}x+\theta_{k^{(n)}})}
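To make the sampling procedure concrete, here is a minimal classical emulation in Python/NumPy. Everything in it is illustrative rather than taken from the paper: the four-level toy Hamiltonian and the input state are arbitrary choices of mine, the Fourier coefficients are those of a plain periodic step function (the paper constructs carefully smoothed coefficients with much better convergence), and the single-shot Hadamard tests are emulated by sampling ±1 outcomes with the correct means. On a real ISQ device, each shot would instead require one controlled time evolution for a duration set by the sampled k.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy model: a diagonal 4-level "Hamiltonian" with eigenvalues in (-pi, pi),
# and an input state with a large overlap with the ground state.
energies = np.array([-1.2, -0.4, 0.3, 1.1])
amps = np.array([0.9, 0.3, 0.2, 0.1])
probs = amps**2 / np.sum(amps**2)                 # p(n)

# Truncated Fourier coefficients F_k of the 2*pi-periodic step function.
# (The paper uses smoothed coefficients with much faster convergence.)
d = 400
ks = np.arange(-d, d + 1)
safe_ks = np.where(ks == 0, 1, ks)                # avoid division by zero at k = 0
F_k = np.where(ks == 0, 0.5, (1 - (-1.0) ** ks) / (2j * np.pi * safe_ks))

theta = np.angle(F_k)                             # theta_k
F = np.sum(np.abs(F_k))                           # F = sum_k |F_k|
P = np.abs(F_k) / F                               # sampling distribution P(k)

# Z_k = <psi| e^{-iHk} |psi>, computed exactly for the toy model
Z = np.array([np.sum(probs * np.exp(-1j * energies * k)) for k in ks])

# Sample k ~ P(k), then emulate the single-shot Hadamard tests:
# +/-1 outcomes whose means are Re(Z_k) and Im(Z_k).
N = 200_000
idx = rng.choice(len(ks), size=N, p=P)
re = np.where(rng.random(N) < (1 + Z[idx].real) / 2, 1.0, -1.0)
im = np.where(rng.random(N) < (1 + Z[idx].imag) / 2, 1.0, -1.0)
shots = re + 1j * im

def estimated_cdf(x):
    """Sample-mean estimator of the CDF at energy x."""
    return (F / N) * np.sum(shots * np.exp(1j * (ks[idx] * x + theta[idx]))).real

for x in [-0.8, 0.0, 0.7, 1.5]:
    exact = np.sum(probs[energies <= x])
    print(f"x = {x:+.1f}   estimated = {estimated_cdf(x):+.3f}   exact = {exact:+.3f}")
```

The estimates agree with the exact CDF up to Monte Carlo and truncation error, which can be tuned through N and d, and the ground-state weight shows up as the large first jump.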
One of the surprising properties of this algorithm is that its cost scales like 1/\epsilon instead of 1/\epsilon^2 with respect to the target error; i.e., it has Heisenberg-limited scaling.
What I enjoy the most about this result is that it is very different from other methods to compute ground-state energies. I recognize it as a signal that the algorithms that are most suitable for the ISQ era may be truly novel compared to anything developed so far. That’s exciting!
Next steps
Chances are that there will be many more advances in developing software and algorithms for ISQ devices. The whole point of this post is to invite you to participate in these efforts. We are already working in this direction here at Xanadu. We don’t know what these results will look like, but I can already identify some open questions:
- How good are these methods? Limited work has been done on estimating the resources required by ISQ algorithms and carefully comparing them with existing techniques. Such efforts can reveal bottlenecks and point the way to improvements.
- Most work on ISQ algorithms has focused on the simulation of quantum systems. What about other areas like quantum machine learning? Maybe factoring?
- In quantum simulation, many papers still assume access to a time-evolution operator. What are the best ways to actually perform Hamiltonian simulation on ISQ devices?
- Can we develop parallelization techniques suitable for ISQ devices to reduce total runtime by executing a quantum algorithm across many devices?
Building quantum computers is very difficult, but there are many teams of bright people working hard to achieve this goal. Those of us who do not spend our time in the lab have an opportunity to anticipate the next generation of quantum computers and prepare to use them to their full potential.
Papers to read
- Early fault-tolerant simulations of the Hubbard model
- Heisenberg-limited ground-state energy estimation for early fault-tolerant quantum computers
- Randomized Quantum Algorithm for Statistical Phase Estimation
- Ground-state preparation and energy estimation on early fault-tolerant quantum computers via quantum eigenvalue transformation of unitary matrices
- Quantum algorithm for ground state energy estimation using circuit depth with exponentially improved dependence on precision