Quantum linear system solvers

Rough overview (in words)

The goal is to solve linear systems of equations with quantum subroutines. More precisely, a quantum linear system solver (QLSS) takes as input an \(N\times N\) complex matrix \(A\) together with a complex vector \(b\) of size \(N\), and outputs a pure quantum state \(\ket{\tilde x}\) that is an \(\varepsilon\)-approximation of the normalized solution vector of the linear system of equations \(Ax=b\). In basic versions, QLSSs do so by loading the normalized entries of the matrix \(A\) and the normalized entries of the vector \(b\) into a unitary quantum circuit, either from a quantum random access memory (QRAM) data structure, or—if the structure of \(A\) and \(b\) allows for this—by efficiently computing the corresponding entries on the fly.

Crucially, the number of algorithmic qubits of the linear system solver itself is only roughly \(\log_2(N)\), which is exponentially smaller than the matrix size. While for general systems the number of QRAM qubits still scales with the matrix/vector size, QRAM encodings can be made more space efficient for sparse systems or can even be avoided when the corresponding entries are efficiently computable. The complexity of QLSSs depends on the condition number \(\kappa(A)=\left\|A^{-1}\right\|\cdot\|A\|\) of the matrix \(A\), and one then aims to give circuits with minimal quantum resource costs—such as ancilla qubits, total gate count, circuit depth, etc.—in terms of \(\kappa(A)\) and the desired accuracy \(\varepsilon\in(0,1)\).
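As a purely classical illustration of the central quantity here (not part of any quantum algorithm), the condition number \(\kappa(A)\) can be computed from the singular values of \(A\); a minimal numpy sketch:

```python
import numpy as np

# Classical sketch: the condition number kappa(A) = ||A^{-1}|| * ||A||
# (spectral norm), which governs the complexity of QLSSs.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))

sigma = np.linalg.svd(A, compute_uv=False)  # singular values, descending
kappa = sigma[0] / sigma[-1]                # ||A|| * ||A^{-1}|| for invertible A

print(f"kappa(A) = {kappa:.3f}")
```

This agrees with numpy's built-in 2-norm condition number `np.linalg.cond(A, 2)`.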

Rough overview (in math)

There are different standard input models for how the classical data \((A,b)\) is loaded into the quantum processing unit; for general matrices, these models are equivalent up to small polylogarithmic overheads. We state the complexities in terms of query access to a unitary \(U_b\) preparing the \(n=\lceil\log_2(N)\rceil\)-qubit pure quantum state \(\ket{b}=\|b\|_2^{-1}\cdot\sum_{i=1}^Nb_i\ket{i}\) for \(b=(b_1,\ldots,b_N)\), where \(\lVert \cdot \rVert_2\) denotes the standard Euclidean vector norm, together with an \((\alpha,a,0)\)-block-encoding \(U_A\) of the matrix \(A\). The QLSS problem is then stated as follows: for a triple \((U_A,U_b,\varepsilon)\) as above, the goal is to create an \(n\)-qubit pure quantum state \(\ket{\tilde x}\) such that

\[\begin{align} \label{eq:regression} \Big\|\ket{\tilde x}-\ket{x}\Big\|_2\leq\varepsilon\quad\text{for $\ket{x}=\frac{\sum_{i=1}^Nx_i\ket{i}}{\left\|\sum_{i=1}^Nx_i\ket{i}\right\|_2}$ defined by $A x=b$ with $x=(x_1,\ldots,x_N)$,} \end{align}\]

by employing as few times as possible the unitary operators \(U_A,U_b\), their inverses \(U_A^\dagger,U_b^\dagger\), controlled versions of \(U_A,U_b\), and additional quantum gates on potentially additional ancilla qubits.
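For concreteness, the target of the problem \(\eqref{eq:regression}\) can be spelled out classically: a sketch that solves the system with numpy (rather than with quantum subroutines) and forms the normalized state, together with a toy \(\varepsilon\)-approximation:

```python
import numpy as np

# Classical sketch of the QLSS target: solve Ax = b and normalize the
# solution to obtain the amplitudes of the state |x>.
rng = np.random.default_rng(1)
N = 8
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
b = rng.standard_normal(N) + 1j * rng.standard_normal(N)

x = np.linalg.solve(A, b)
ket_x = x / np.linalg.norm(x)  # amplitudes of the normalized solution state |x>

# An eps-approximation |x~> must satisfy || |x~> - |x> ||_2 <= eps;
# here we build a toy perturbation of norm eps/2 and renormalize.
eps = 1e-2
perturbation = np.zeros(N, dtype=complex)
perturbation[0] = eps / 2
ket_x_tilde = (ket_x + perturbation)
ket_x_tilde /= np.linalg.norm(ket_x_tilde)
```

By the triangle inequality, a perturbation of norm \(\varepsilon/2\) followed by renormalization keeps the total error within \(\varepsilon\).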

One way to think of the QLSS problem is that we seek the matrix inverse \(A^{-1}\) and this can, e.g., be implemented by quantum singular value transformation (QSVT) acting on \(A\) (via its block-encoding) with a polynomial approximation of the inverse function on the interval \([\|A\|/\kappa(A),\|A\|]\). The complexity of the corresponding scheme thereby depends on the degree of the polynomial needed for a good approximation of the inverse function on the relevant interval, and as such on the condition number \(\kappa(A)\), the normalization factor \(\alpha\), and the approximation error \(\varepsilon\) of the resulting QLSS. In fact, it turns out that the complexity of most quantum algorithms depends on the following combined quantity

\[\begin{align} \label{eq:kappa-prime} \kappa'(A):=\kappa(A)\cdot\frac{\alpha}{\|A\|}=\alpha\cdot\|A^{-1}\|, \end{align}\]

which is no smaller than \(\kappa(A)\), because \(\alpha\geq \|A\|\) due to the unitarity of the block-encoding. Note that in QRAM-based implementations one naturally gets \(\alpha=\|A\|_F\), which then leads to linear complexity dependence on the Frobenius norm \(\|A\|_F\).
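The dependence of the polynomial degree on the condition number can be illustrated numerically. The sketch below fits Chebyshev polynomials to \(1/x\) on \([1/\kappa,1]\) and records the smallest degree reaching a fixed accuracy; note that this unconstrained fit is only an illustration, since the actual QSVT construction additionally requires the polynomial to remain bounded on all of \([-1,1]\), which leads to the \(\mathcal{O}(\kappa\log(\kappa/\varepsilon))\) degree scaling:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Illustration: polynomial approximation of 1/x on [1/kappa, 1]; the
# required degree grows with kappa. (QSVT additionally needs the polynomial
# bounded on all of [-1, 1], giving degree O(kappa * log(kappa / eps));
# this unconstrained least-squares fit is only a sketch.)
def max_error(kappa, degree, samples=2000):
    xs = np.linspace(1.0 / kappa, 1.0, samples)
    coeffs = C.chebfit(xs, 1.0 / xs, degree)  # least-squares Chebyshev fit
    return np.max(np.abs(C.chebval(xs, coeffs) - 1.0 / xs))

degrees = {}
for kappa in (5, 10, 20):
    # smallest degree achieving max error below 1e-3 on the interval
    degrees[kappa] = next(d for d in range(1, 400) if max_error(kappa, d) < 1e-3)
    print(f"kappa = {kappa:3d}: degree ~ {degrees[kappa]}")
```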

As noted in [1, 2], in general we need not assume that \(A\) is invertible nor that it is a square matrix, but can instead use the Moore–Penrose pseudoinverse \(A^+\) of the matrix to solve the problem \(\eqref{eq:regression}\) in a least-squares sense, in which case one needs to appropriately change the definition of \(\kappa(A)\) to \(\left\|A^{+}\right\|\cdot\|A\|\). In fact, the above QSVT-based approach directly solves this more general version of the problem [3].
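The least-squares generalization also has a direct classical counterpart, sketched here with numpy: for rectangular \(A\), the target is the normalized Moore–Penrose solution \(x=A^+b\), and the condition number generalizes to \(\|A^+\|\cdot\|A\|\):

```python
import numpy as np

# Classical sketch of the least-squares generalization: for rectangular A,
# the pseudoinverse solution x = A^+ b minimizes ||Ax - b||_2, and kappa(A)
# generalizes to ||A^+|| * ||A||.
rng = np.random.default_rng(2)
A = rng.standard_normal((12, 4))  # tall (overdetermined) system
b = rng.standard_normal(12)

x_pinv = np.linalg.pinv(A) @ b                   # Moore-Penrose solution
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares solution

sigma = np.linalg.svd(A, compute_uv=False)
kappa = sigma[0] / sigma[-1]  # ||A|| * ||A^+|| for full-rank A
```

For a full-rank tall matrix, the pseudoinverse and least-squares solutions coincide.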

Dominant resource cost (gates/qubits)

The state-of-the-art QLSS from [4] (for invertible matrices) does not directly employ the QSVT for the inverse function. Instead, it is based on discrete adiabatic methods together with quantum eigenstate filtering based on the QSVT for a minimax polynomial [5]. As above, the quantum algorithm assumes access to a block-encoding \(U_A\) of the matrix \(A\) with normalization factor \(\alpha\), operates on \(n+5\) qubits (plus the additional qubits used for the block-encoding, discussed in more detail below), succeeds with probability roughly \(1/2\), and uses \(Q\) controlled queries to each of \(U_A\) and \(U_A^\dagger\), and \(2Q\) queries to each of \(U_b\) and \(U_b^\dagger\), for

\[\begin{align} \label{eq:kappa-scaling} Q & = \kappa'(A)\Big(C + \ln(2\varepsilon^{-1})\Big) + \mathcal{O}\left( \sqrt{\kappa'(A)} \right)=\mathcal{O}\left( \kappa'(A)\log(\varepsilon^{-1}) \right) \end{align}\]

where the constant \(C\) comes from the quantitative adiabatic analysis, and there is an additional constant quantum gate overhead for each query round. The query complexity is asymptotically optimal in terms of \(\kappa(A)\) [6]. The adiabatic constant \(C\) can be rigorously bounded as \(C \leq 58617\).¹ Note that when \(C\) is this large, the corresponding term actually dominates the \(\kappa'(A)\log(\varepsilon^{-1})\) term in practical scenarios. In recent work [7], a version of the adiabatic approach with asymptotic complexity \(\mathcal{O}\left( \kappa'(A)\log(\kappa(A)\varepsilon^{-1}) \right)\) outperforms the asymptotically optimal scheme by close to an order of magnitude in terms of finite quantum resource counts, for condition numbers up to \(\kappa(A)\approx10^{32}\).
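The practical impact of the constant \(C\) can be seen with a back-of-the-envelope estimate of \(\eqref{eq:kappa-scaling}\), here ignoring the lower-order \(\mathcal{O}(\sqrt{\kappa'(A)})\) term:

```python
import math

# Back-of-the-envelope sketch of the query count
# Q = kappa' * (C + ln(2/eps)) + O(sqrt(kappa')); with the rigorous bound
# C <= 58617, the constant term dominates ln(2/eps) in practical regimes.
C_BOUND = 58617

def query_count(kappa_prime, eps):
    return kappa_prime * (C_BOUND + math.log(2.0 / eps))

for eps in (1e-2, 1e-10):
    Q = query_count(1e6, eps)
    print(f"kappa' = 1e6, eps = {eps:.0e}: Q ~ {Q:.3e}")
```

Even at precision \(\varepsilon=10^{-10}\), the logarithmic term \(\ln(2\varepsilon^{-1})\approx 24\) is dwarfed by the rigorous bound on \(C\).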

Other known QLSSs with asymptotically worse complexities are based on QSVT [3, 8] or linear combination of unitaries (LCU) [9], and are often combined with variable-time amplitude amplification (VTAA) [10, 11] for improved performance. While the known bounds on the asymptotic complexities of these methods are slightly worse with additional polylogarithmic factors, it remains open if finite size performance could be competitive (as the known upper bounds on the adiabatic constant \(C\) are quite large). Moreover, to date, these VTAA-based algorithms are the only variants that are proven to solve the generic least squares (pseudoinverse) problem while achieving a close-to-optimal asymptotic scaling.

Note that if the matrix \(A\) is given in a classical data structure in the computational basis, then standard ways to create the block-encoding \(U_A\) make use of a QRAM structure. For general (dense) matrices \(A\), the QRAM then requires \(\mathcal{O}\left( N^2 \right)\) qubits with circuit depth \(\mathcal{O}\left( n \right)\) for each query; alternatively, as few as \(\mathcal{O}\left( n \right)\) ancilla qubits can suffice, at the expense of \(\mathcal{O}\left( N^2 \right)\) circuit depth [12, 13]. Initializing the depth-efficient QRAM data structure will in general also take \(\mathcal{O}\left( N^2 \right)\) time. However, if \(A\) is sparse, either in the computational basis [14], the Pauli basis [15], or any orthonormal basis with an efficiently implementable basis transformation, there are more efficient direct constructions for block-encoding \(A\). Moreover, for Pauli basis access, there exist randomized QLSSs with complexity scaling as the \(L_1\)-norm of the Pauli coefficients [16], completely avoiding the use of block-encodings (and as such QRAM and ancilla qubits).
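The Pauli-basis access model can be illustrated classically: the sketch below decomposes a small Hermitian matrix into Pauli strings and computes the \(L_1\)-norm of the coefficients that enters the complexity of the randomized QLSSs [16]:

```python
import itertools
import numpy as np

# Classical sketch: Pauli decomposition A = sum_P c_P P of a Hermitian
# matrix, and the L1-norm lambda = sum_P |c_P| entering the complexity of
# randomized QLSSs.
PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_string(labels):
    P = PAULIS[labels[0]]
    for l in labels[1:]:
        P = np.kron(P, PAULIS[l])
    return P

def pauli_coefficients(A):
    """Coefficients c_P in A = sum_P c_P P (Hilbert-Schmidt inner products)."""
    n = int(np.log2(A.shape[0]))
    coeffs = {}
    for labels in itertools.product("IXYZ", repeat=n):
        c = np.trace(pauli_string(labels).conj().T @ A) / 2**n
        if abs(c) > 1e-12:
            coeffs["".join(labels)] = c
    return coeffs

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2  # random 2-qubit Hermitian matrix
coeffs = pauli_coefficients(A)
l1_norm = sum(abs(c) for c in coeffs.values())
print(f"lambda = sum_P |c_P| = {l1_norm:.3f}")
```

Since every Pauli string is unitary, the \(L_1\)-norm \(\lambda=\sum_P|c_P|\) upper-bounds the spectral norm \(\|A\|\).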

Caveats

QLSSs are an important subroutine for a variety of application areas of quantum algorithms. However, it is crucial to keep track of all the quantum and classical resources required and to compare these to state-of-the-art classical methods. In particular, the following factors should be taken into account:

  • The classical precomputation complexities for the eigenstate filtering routine are neglected, but can be kept efficient in practice [17].
  • The adiabatic constant \(C\) is expected to be about an order of magnitude smaller than the rigorous bound stated above, but, at least for the asymptotically optimal approach, not more than one order of magnitude smaller [4].
  • When needed, the QRAM cost can be prohibitive, if it requires the full overhead of quantum error correction and fault tolerance [12], especially for QRAMs of maximum size \(\mathcal{O}\left( N^2 \right)\) qubits, required for general (dense) matrices.
  • In the formulation of the QLSS problem, the pure quantum state \(\ket{x}\) corresponds to the normalized solution vector of the linear system \(Ax=b\). While the normalization factor can be obtained as well, this comes at the price of added complexity scaling as \(\widetilde{\mathcal{O}}\left( n\kappa'(A)\varepsilon^{-1} \right)\) [2, Corollary 32].
  • QLSSs do not produce a classical description of the solution vector \(x\) or an approximation thereof, but rather the pure quantum state \(\ket{\tilde x}\). In order to obtain a classical approximation of the vector \(x\), one needs to combine QLSSs with pure state quantum tomography, which can be performed using \(\mathcal{O}\left( N\varepsilon^{-2} \right)\) samples. If \(\text{poly}(n)\) query-cost QRAM is also available, then the complexity can be quadratically improved in terms of the precision using optimized pure state tomography [18], or alternatively the overall complexity may be further improved using iterative refinement to \(\mathcal{O}\left( Ns (s+\frac{\kappa^2(A)}{\|A\|})\text{polylog}(N/\varepsilon) \right)\) as described in [19], where \(s\) is the maximum number of nonzero elements of \(A\) in any row or column.
  • The overall complexities \(\widetilde{\mathcal{O}}\left( N\kappa'(A)\varepsilon^{-1} \right)\) and \(\mathcal{O}\left( Ns (s+\frac{\kappa^2(A)}{\|A\|})\text{polylog}(N/\varepsilon) \right)\) (where we generously allow \(\text{poly}(n)\) query-cost QRAM) to obtain a classical description of the solution can be compared to classical textbook Gaussian elimination–based computation, which leads to complexity \(\mathcal{O}\left( N^3 \right)\), or more precisely \(\mathcal{O}\left( N^\omega \right)\) with \(\omega\in[2,2.372)\) denoting the matrix multiplication exponent. Further, QLSSs should also be compared with state-of-the-art randomized solvers. For example, the randomized Kaczmarz method with standard classical access to the matrix elements returns an \(\varepsilon\)-approximation of the vector \(x\) while scaling as \(\mathcal{O}\left( s\kappa_F^2(A)\log(\varepsilon^{-1}) \right)\) for \(s\) row-sparse matrices, where \(\kappa_F(A)=\left\|A^{-1}\right\|\cdot\|A\|_F\). Moreover, if \(A\) is \(s\)-sparse and positive semidefinite (PSD), then using the conjugate gradient method one can obtain a solution in time \(\mathcal{O}\left( Ns\sqrt{\kappa(A)}\log(\varepsilon^{-1}) \right)\) [20, Chapter 10.2], which can be generalized to the least-squares problem (and thus to non-Hermitian matrices) at the cost of a quadratically worse condition number dependence \(\mathcal{O}\left( Ns\kappa(A)\log(\kappa(A)/\varepsilon) \right)\) by considering the modified equation \(A^\dagger A x = A^\dagger b\). As such, it seems that QLSSs may not provide a superquadratic speedup when a full classical solution is to be extracted, and even subquadratic speedups seem limited to a narrow parameter regime.
  • Quantum-inspired methods [21, 22] that start from a classical data structure intended to mimic QRAM—allowing to sample from probability distributions with probabilities proportional to the squared magnitudes of elements in a given row of \(A\)—give samples from an \(\varepsilon\)-approximation to the solution vector in (dimension free) complexity \(\mathcal{O}\left( \kappa_F^4(A)\kappa^2(A)\varepsilon^{-2} \right)\) [23, 22], and can be used to compute an approximate solution by repeated sampling. Note that while the required data structure is classical, it might still be prohibitively expensive to build when the matrix \(A\) is huge.
  • When it comes to classical methods, solvers that depend on the condition number are useful in practice whenever combined with preconditioners [24]. However, the performance of preconditioners is often only heuristic, and using preconditioners for QLSS is not (yet) explored in-depth [25, 26, 27].
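The randomized Kaczmarz baseline mentioned in the caveats above is simple enough to sketch directly (an illustrative implementation, with rows sampled proportionally to their squared norms):

```python
import numpy as np

# Classical baseline sketch: randomized Kaczmarz. Sample row i with
# probability ||a_i||^2 / ||A||_F^2 and project the iterate onto the
# hyperplane a_i . x = b_i; the convergence rate is governed by
# kappa_F(A) = ||A^{-1}|| * ||A||_F.
def randomized_kaczmarz(A, b, iters=30000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.einsum("ij,ij->i", A, A)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]  # orthogonal projection
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((50, 20))
x_true = rng.standard_normal(20)
b = A @ x_true  # consistent overdetermined system

x = randomized_kaczmarz(A, b)
print("error:", np.linalg.norm(x - x_true))
```

Each iteration touches a single row, so for \(s\) row-sparse matrices the per-step cost is \(\mathcal{O}(s)\), consistent with the overall \(\mathcal{O}\left( s\kappa_F^2(A)\log(\varepsilon^{-1}) \right)\) scaling quoted above.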

Example use cases

  • Quantum interior point methods for linear and semidefinite optimization [28, 29]
  • Quantum machine learning, e.g., quantum support vector machines [30]
  • Solving differential equations, e.g., via the finite element method [31]

Further reading

  • Original QLSS (termed HHL) [6]
  • For a recent overview discussion of QLSS, see [32]
  • State-of-the-art QLSS based on discrete adiabatic methods [4]

Bibliography

  1. Nathan Wiebe, Daniel Braun, and Seth Lloyd. Quantum algorithm for data fitting. Physical Review Letters, 109(5):050505, 2012. arXiv: https://arxiv.org/abs/1204.5242. doi:10.1103/PhysRevLett.109.050505.

  2. Shantanav Chakraborty, András Gilyén, and Stacey Jeffery. The power of block-encoded matrix powers: improved regression techniques via faster hamiltonian simulation. In Proceedings of the 46th International Colloquium on Automata, Languages, and Programming (ICALP), 33:1–33:14. 2019. arXiv: https://arxiv.org/abs/1804.01973. doi:10.4230/LIPIcs.ICALP.2019.33.

  3. András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe. Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics. In Proceedings of the 51st ACM Symposium on the Theory of Computing (STOC), 193–204. 2019. arXiv: https://arxiv.org/abs/1806.01838. doi:10.1145/3313276.3316366.

  4. Pedro C.S. Costa, Dong An, Yuval R. Sanders, Yuan Su, Ryan Babbush, and Dominic W. Berry. Optimal scaling quantum linear-systems solver via discrete adiabatic theorem. PRX Quantum, 3:040303, 10 2022. arXiv: https://arxiv.org/abs/2111.08152. URL: https://link.aps.org/doi/10.1103/PRXQuantum.3.040303, doi:10.1103/PRXQuantum.3.040303.

  5. Lin Lin and Yu Tong. Optimal polynomial based quantum eigenstate filtering with application to solving quantum linear systems. Quantum, 4:361, 2020. arXiv: https://arxiv.org/abs/1910.14596. doi:10.22331/q-2020-11-11-361.

  6. Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd. Quantum algorithm for linear systems of equations. Physical Review Letters, 103(15):150502, 2009. arXiv: https://arxiv.org/abs/0811.3171. doi:10.1103/PhysRevLett.103.150502.

  7. David Jennings, Matteo Lostaglio, Sam Pallister, Andrew T. Sornborger, and Yigit Subasi. Efficient quantum linear solver algorithm with detailed running costs. arXiv: https://arxiv.org/abs/2305.11352, 2023.

  8. John M. Martyn, Zane M. Rossi, Andrew K. Tan, and Isaac L. Chuang. Grand unification of quantum algorithms. Physical Review X, 2(4):040203, 2021. arXiv: https://arxiv.org/abs/2105.02859. doi:10.1103/PRXQuantum.2.040203.

  9. Andrew M. Childs, Robin Kothari, and Rolando D. Somma. Quantum algorithm for systems of linear equations with exponentially improved dependence on precision. SIAM Journal on Computing, 46(6):1920–1950, 2017. arXiv: https://arxiv.org/abs/1511.02306. doi:10.1137/16M1087072.

  10. Andris Ambainis. Variable time amplitude amplification and quantum algorithms for linear algebra problems. In Proceedings of the 29th Symposium on Theoretical Aspects of Computer Science (STACS), 636–647. 2012. arXiv: https://arxiv.org/abs/1010.4458. doi:10.4230/LIPIcs.STACS.2012.636.

  11. Andris Ambainis, Martins Kokainis, and Jevgēnijs Vihrovs. Improved algorithm and lower bound for variable time quantum search. arXiv: https://arxiv.org/abs/2302.06749, 2023.

  12. Connor T. Hann, Gideon Lee, S.M. Girvin, and Liang Jiang. Resilience of quantum random access memory to generic noise. PRX Quantum, 2:020311, 4 2021. arXiv: https://arxiv.org/abs/2012.05340. URL: https://link.aps.org/doi/10.1103/PRXQuantum.2.020311, doi:10.1103/PRXQuantum.2.020311.

  13. B. David Clader, Alexander M. Dalzell, Nikitas Stamatopoulos, Grant Salton, Mario Berta, and William J. Zeng. Quantum resources required to block-encode a matrix of classical data. IEEE Transactions on Quantum Engineering, 3:1–23, 2022. arXiv: https://arxiv.org/abs/2206.03505. doi:10.1109/TQE.2022.3231194.

  14. Olivia Di Matteo, Vlad Gheorghiu, and Michele Mosca. Fault-tolerant resource estimation of quantum random-access memories. IEEE Transactions on Quantum Engineering, 1:1–13, 2020. arXiv: https://arxiv.org/abs/1902.01329. doi:10.1109/TQE.2020.2965803.

  15. Kianna Wan. Exponentially faster implementations of select(h) for fermionic hamiltonians. Quantum, 2021. arXiv: https://arxiv.org/abs/2004.04170. doi:10.22331/q-2021-01-12-380.

  16. Samson Wang, Sam McArdle, and Mario Berta. Qubit-efficient randomized quantum algorithms for linear algebra. arXiv: https://arxiv.org/abs/2302.01873, 2023.

  17. Yulong Dong, Xiang Meng, K. Birgitta Whaley, and Lin Lin. Efficient phase-factor evaluation in quantum signal processing. Physical Review A, 103:042419, 2021. arXiv: https://arxiv.org/abs/2002.11649. doi:10.1103/PhysRevA.103.042419.

  18. Joran van Apeldoorn, Arjan Cornelissen, András Gilyén, and Giacomo Nannicini. Quantum tomography using state-preparation unitaries. In Proceedings of the 34th ACM-SIAM Symposium on Discrete Algorithms (SODA), 1265–1318. 2023. arXiv: https://arxiv.org/abs/2207.08800. doi:10.1137/1.9781611977554.ch47.

  19. Mohammadhossein Mohammadisiahroudi, Zeguan Wu, Brandon Augustino, Tamás Terlaky, and Arielle Carr. Quantum-enhanced regression analysis using state-of-the-art qlsas and qipms. In Proceedings of the IEEE/ACM 7th Symposium on Edge Computing, volume, 375–380. 2022. doi:10.1109/SEC54971.2022.00055.

  20. Wolfgang Hackbusch. Iterative solution of large sparse systems of equations. Volume 95 of Applied Mathematical Sciences. Springer, 2nd edition, 2016. doi:10.1007/978-3-319-28483-5.

  21. Nai-Hui Chia, András Gilyén, Tongyang Li, Han-Hsuan Lin, Ewin Tang, and Chunhao Wang. Sampling-based sublinear low-rank matrix arithmetic framework for dequantizing quantum machine learning. In Proceedings of the 52nd ACM Symposium on the Theory of Computing (STOC), 387–400. 2020. arXiv: https://arxiv.org/abs/1910.06151. doi:10.1145/3357713.3384314.

  22. András Gilyén, Zhao Song, and Ewin Tang. An improved quantum-inspired algorithm for linear regression. Quantum, 6:754, 2022. arXiv: https://arxiv.org/abs/2009.07268. doi:10.22331/q-2022-06-30-754.

  23. Changpeng Shao and Ashley Montanaro. Faster quantum-inspired algorithms for solving linear systems. ACM Transactions on Quantum Computing, 2022. arXiv: https://arxiv.org/abs/2103.10309. doi:10.1145/3520141.

  24. Yousef Saad. Iterative Methods for Sparse Linear Systems. Society for Industrial and Applied Mathematics, second edition, 2003. doi:10.1137/1.9780898718003.

  25. B David Clader, Bryan C Jacobs, and Chad R Sprouse. Preconditioned quantum linear system algorithm. Physical Review Letters, 110(25):250504, 2013. arXiv: https://arxiv.org/abs/1301.2340. doi:10.1103/PhysRevLett.110.250504.

  26. Changpeng Shao and Hua Xiang. Quantum circulant preconditioner for a linear system of equations. Physical Review A, 98(6):062321, 2018. arXiv: https://arxiv.org/abs/1807.04563. doi:10.1103/PhysRevA.98.062321.

  27. Yu Tong, Dong An, Nathan Wiebe, and Lin Lin. Fast inversion, preconditioned quantum linear system solvers, fast green's-function computation, and fast evaluation of matrix functions. Physical Review A, 104:032422, 2021. arXiv: https://arxiv.org/abs/2008.13295. doi:10.1103/PhysRevA.104.032422.

  28. Iordanis Kerenidis and Anupam Prakash. A quantum interior point method for lps and sdps. ACM Transactions on Quantum Computing, 2020. arXiv: https://arxiv.org/abs/1808.09266. doi:10.1145/3406306.

  29. Mohammadhossein Mohammadisiahroudi, Ramin Fakhimi, and Tamás Terlaky. Efficient use of quantum linear system algorithms in interior point methods for linear optimization. arXiv: https://arxiv.org/abs/2205.01220, 2022.

  30. Patrick Rebentrost, Masoud Mohseni, and Seth Lloyd. Quantum support vector machine for big data classification. Physical Review Letters, 113(13):130503, 2014. arXiv: https://arxiv.org/abs/1307.0471. doi:10.1103/PhysRevLett.113.130503.

  31. Ashley Montanaro and Sam Pallister. Quantum algorithms and the finite element method. Physical Review A, 93(3):032324, 2016. arXiv: https://arxiv.org/abs/1512.05903. doi:10.1103/PhysRevA.93.032324.

  32. Dong An and Lin Lin. Quantum linear system solver based on time-optimal adiabatic quantum computing and quantum approximate optimization algorithm. ACM Transactions on Quantum Computing, 2022. arXiv: https://arxiv.org/abs/1909.05500. doi:10.1145/3498331.


  1. This number is derived from applying [4, Theorem 9] with \(\sqrt{2-\sqrt{2}}\times 44864 \times \kappa\) steps, each of which incurs one call to the block-encoding, such that the output is guaranteed to have overlap at least \(1/\sqrt{2}\) with the ideal state. Eigenstate filtering then succeeds with probability at least \(1/2\); accounting for the need to repeat twice on average, one arrives at a constant \(117235\), matching [7, Eq. (L2)].