
Portfolio optimization

Overview

Given a set of possible assets into which one can invest, the problem of portfolio optimization (PO) involves finding the optimal allocation of funds across these assets so as to maximize returns while minimizing risk. The standard mean-variance formulation, commonly called the Markowitz model, is widely used in the financial industry, owing to its simplicity and broad applicability. Sophisticated constraints, transaction cost functions, and modifications to the problem can be used to model realistic, modern portfolio optimization problems. Numerically solving these optimization problems is a routine part of existing workflows in financial services operations. Several quantum approaches to solving the portfolio optimization problem have been proposed, each with its own advantages and drawbacks.

Actual end-to-end problem(s) solved

Consider a set of \(n\) investable assets with a fixed total budget. Define \(w_i \in \mathbb{R}\) to be the fraction of the total budget that is invested into asset \(i\). Thus, the \(n\)-dimensional vector \(w\) defines a portfolio. Let \(r\) be a known \(n\)-dimensional vector denoting the expected return for each of the available assets, i.e., the percentage by which the value of each asset is expected to grow over some defined time period. Let \(\Sigma\in\mathbb{R}^{n\times n}\) be the covariance matrix governing the random (and possibly correlated) fluctuations of the asset returns away from their mean \(r\). In practice, the input parameters \(\Sigma\) and \(r\) can be inferred from historical stock price data, or through more sophisticated analyses. The covariance matrix can be used to define a portfolio's "risk" \(w^\intercal \Sigma w\), which is precisely the variance in the returns it generates, assuming the underlying model is accurate. Denote the all-ones vector by \(\mathbf{1}\), and for any pair of vectors \(u,v\) let \(\langle u,v\rangle\) denote the standard inner product between \(u\) and \(v\). The goal of the Markowitz formulation of portfolio optimization is to find the optimal portfolio (i.e., the vector of weights \(w\)) that does one of the following:

  • maximizes the expected return subject to a fixed risk parameter \(\sigma_0^2\)
    \[\begin{align} \max_{w}\; & \langle w,r\rangle \\ \mathrm{s.t.}\quad w^\intercal \Sigma w & =\sigma_0^2 \\ \quad \langle \mathbf{1}, w\rangle & =1 \end{align}\]
  • minimizes risk subject to a fixed return parameter \(r_0\)
    \[\begin{align}\label{eq:PO_fixed_return} \min_{w}\; & w^\intercal \Sigma w \\ \mathrm{s.t.}\quad \langle w,r\rangle & =r_0 \\ \langle \mathbf{1}, w\rangle & =1 \end{align}\]
  • maximizes return and minimizes risk with a tradeoff determined by a parameter known as the "risk aversion parameter" \(\lambda\):
    \[\begin{align} \label{eq:quadratic_objective} \max_{w}\; \langle w,r \rangle- \lambda & w^\intercal \Sigma w \\ \mathrm{s.t.}\quad \langle\mathbf{1}, w\rangle & =1 \end{align}\]

    or the alternative formulation that penalizes the square root of the risk (the standard deviation rather than the variance)

    \[\begin{align} \max_{w}\; \langle w,r\rangle- q & \sqrt{w^\intercal \Sigma w} \\ \mathrm{s.t.}\quad \langle\mathbf{1}, w\rangle & =1, \end{align}\]

    where \(q\) plays the same role as \(\lambda\).

Typically, it is satisfactory to find a vector that optimizes the objective function up to additive error \(\epsilon\), for some prespecified value of \(\epsilon\).

When solving the above Markowitz model formulations of PO, the absence of inequality constraints leads to simpler optimization problems that can be solved with simpler approaches. For example, the optimization problem in Eq. \(\eqref{eq:PO_fixed_return}\) is a simple quadratic program without complicated constraints, for which one can derive a closed-form expression for \(w\) using Lagrange multipliers [1] (a numerical sketch of this closed form appears after the list below). More general portfolio optimization problems that include practically relevant constraints (such as the simple "long-only" constraint \(w_i\geq 0\), in contrast to "short" positions, in which \(w_i\) can be negative) cannot generically be solved analytically, and one needs to employ more sophisticated numerical solvers. Real-world portfolio optimization problems include a number of possible constraints (see [2] for a discussion), including, but not limited to:

  • Long only — \(w_j\geq 0, \quad \forall j\)
  • Investment bands — \(w_j\in [w_j^{\mathrm{min}}, w_j^{\mathrm{max}}]\)
  • Turnover constraints — \(|\Delta w_j|\leq U_j\) for some fixed fraction \(U_j\), where \(\Delta w_j\) represents the change in holdings of asset \(j\) from one portfolio to the next.
  • Cardinality constraints — minimum, maximum, or exact number of nonzero assets in the portfolio
  • Sector constraints — specified minimum and/or maximum allocations to groups of assets (e.g., the energy or healthcare sectors)
  • Transaction costs — typically represented as a function of \(|\Delta w_j|\), and often added as a term in the objective function rather than as a constraint
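
To make the Lagrange-multiplier route mentioned above concrete, the following minimal sketch (our own illustration with synthetic data, not code from [1]) solves the fixed-return formulation of Eq. \(\eqref{eq:PO_fixed_return}\) by assembling and solving the associated KKT linear system with NumPy:

```python
import numpy as np

def markowitz_min_risk(Sigma, r, r0):
    """Closed-form minimizer of w^T Sigma w subject to <r, w> = r0 and
    <1, w> = 1, obtained by solving the KKT (Lagrange-multiplier) system."""
    n = len(r)
    ones = np.ones(n)
    # Stationarity: 2*Sigma @ w - lam*r - mu*ones = 0, plus the two equality
    # constraints, stacked into one (n+2) x (n+2) linear system.
    K = np.block([
        [2 * Sigma, -r[:, None], -ones[:, None]],
        [r[None, :], np.zeros((1, 2))],
        [ones[None, :], np.zeros((1, 2))],
    ])
    rhs = np.concatenate([np.zeros(n), [r0, 1.0]])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # optimal weights; sol[n:] are the Lagrange multipliers

# Toy instance with synthetic data (illustrative numbers only).
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
Sigma = A @ A.T / 5                      # random positive-definite covariance
r = rng.uniform(0.01, 0.10, size=5)
w = markowitz_min_risk(Sigma, r, r0=0.05)
print(w, w @ r, w @ Sigma @ w)           # weights, achieved return, risk
```

Note that the weights returned this way may be negative; imposing the long-only constraint \(w_j \geq 0\) breaks the closed form and requires the numerical solvers discussed above.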

As is often the case with optimization problems, the problem formulation strongly affects the solution strategy and the problem "hardness." If the PO problem is unconstrained and continuous (i.e., each \(w_i\) is a real number), then the problem is relatively easy. If convex inequality constraints, such as the long-only or turnover constraints, are imposed, the problem is harder, but can still be tackled by relatively efficient methods for convex optimization. By contrast, if one discretizes the problem (so that \(w\) now represents an integer number of asset shares or lots being traded), or if one applies some of the constraints above (such as the inherently discrete cardinality constraint), then the problem becomes nonconvex and considerably harder to solve. In general, with discrete constraints, the problem can be formulated as an instance of a mixed-integer program (MIP), which is NP-complete and therefore intractable to solve in polynomial time (in \(n\)) under widely believed assumptions. Alternatively, by encoding the integer variables in binary, it can be formulated as a quadratic unconstrained binary optimization (QUBO) instance. These formulations allow quantum algorithms for combinatorial optimization to be employed; for example, the MIP formulation can be solved with a branch-and-bound approach [3], and the QUBO formulation can be solved via Grover-type methods, or heuristically through (NISQ-friendly) quantum annealing approaches (e.g., [4]).
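
As an illustration of the QUBO route, the sketch below (a hedged example of the standard penalty-method encoding; the function name and penalty weight are our own, not taken from the cited references) builds the QUBO matrix for a binary asset-selection variant of Eq. \(\eqref{eq:quadratic_objective}\) with a cardinality-\(k\) budget folded in as a quadratic penalty:

```python
import numpy as np

def po_qubo(Sigma, r, lam, penalty, k):
    """QUBO matrix Q for binary PO with x_i in {0, 1} marking selected assets.
    Cost (to minimize): lam * x^T Sigma x - <r, x> + penalty * (sum_i x_i - k)^2.
    Using x_i^2 = x_i, the penalty expands into off-diagonal couplings plus
    linear terms, so the total cost is x^T Q x up to the constant penalty*k^2."""
    n = len(r)
    Q = lam * np.asarray(Sigma, dtype=float).copy()
    Q += penalty * (np.ones((n, n)) - np.eye(n))  # cross terms of the penalty
    Q += np.diag(penalty * (1.0 - 2.0 * k) - r)   # linear terms on the diagonal
    return Q

# Exhaustive check on a toy instance (synthetic data, illustrative only).
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
Sigma = A @ A.T / 4
r = rng.uniform(0.01, 0.10, size=4)
Q = po_qubo(Sigma, r, lam=0.5, penalty=10.0, k=2)
best = min((np.array(x) for x in np.ndindex(2, 2, 2, 2)),
           key=lambda x: x @ Q @ x)
print(best)  # for large enough penalty, exactly k = 2 assets are selected
```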

Dominant resource cost/complexity

An early approach to solving this optimization problem using a quantum algorithm was presented in [5], in which the Markowitz problem is written as minimizing risk with fixed return (Eq. \(\eqref{eq:PO_fixed_return}\)), and without other complicated constraints. This simple optimization problem boils down to an equality-constrained convex program; it can be solved by introducing Lagrange multipliers and solving a linear system involving the input data \(r\) and \(\Sigma\) [5]. The approach of [5] is to use a quantum linear system solver (QLSS) to prepare the quantum state \(\ket{w}\) whose amplitudes are proportional to the optimal weights \(w_i\). The complexity to do so to error \(\epsilon\) is \(\widetilde{\mathcal{O}}\left( \kappa\zeta\log(1/\epsilon) \right)\), where \(\kappa\) is the condition number of the matrix \(G\) being inverted and \(\zeta = \lVert G \rVert_F/\lVert G \rVert\) is the ratio of its Frobenius norm to its spectral norm. The \(\widetilde{\mathcal{O}}\) suppresses logarithmic factors, including a factor coming from applying unitaries that block-encode the matrix \(G\) in \(\mathrm{polylog}(n)\) depth, essentially equivalent to the assumption that log-depth quantum random access memory (QRAM) is available. It is a priori unclear what the values of \(\kappa\) and \(\zeta\) would be for actual PO instances and whether they depend on \(n\), but the explicit logarithmic dependence of this complexity on \(n\) is appealing. However, a drawback of this approach is that it produces the quantum state \(\ket{w}\) rather than an estimate for the optimal portfolio \(w\). Learning the \(n\) entries of \(w\) to precision \(\epsilon\) in 2-norm incurs a multiplicative overhead of \(\widetilde{\mathcal{O}}\left( n/\epsilon \right)\) using quantum pure-state tomography [6], for a total time complexity of \(\widetilde{\mathcal{O}}\left( n\kappa\zeta/\epsilon \right)\).1
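
The instance parameters \(\kappa\) and \(\zeta\) are ordinary linear-algebra quantities and can be computed directly for any candidate system matrix; a minimal sketch (our own helper, assuming the definitions quoted above):

```python
import numpy as np

def qlss_parameters(G):
    """Instance parameters entering the QLSS cost estimate: the condition
    number kappa and zeta = ||G||_F / ||G|| (Frobenius over spectral norm),
    both computed from the singular values of G."""
    s = np.linalg.svd(G, compute_uv=False)   # singular values, descending
    kappa = s[0] / s[-1]
    zeta = np.linalg.norm(G, "fro") / s[0]
    return kappa, zeta

# e.g., applied to the KKT matrix K built in the earlier Lagrange sketch:
#   kappa, zeta = qlss_parameters(K)
```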

When convex linear inequality constraints, such as long-only or turnover constraints, are included, the above approach will not work. However, a more sophisticated method can be applied, which first maps the PO instance to a convex program (specifically, a second-order cone program (SOCP)) and then makes use of interior point methods to solve the program. These interior point methods can be quantized, forming quantum interior point methods (QIPMs) [7, 8]. The QIPM is an iterative method, where each iteration involves solving a linear system with a QLSS and classically reading out the solution with tomography. Thus, the procedure within each iteration is similar to the procedure above for solving the unconstrained PO problem, but the linear system to be solved is different (and changes with each iteration). A preliminary study of the effectiveness of this approach for PO was given by [9], and a more extensive study later appeared in [10]. The QIPM produces an \(\epsilon\)-optimal classical estimate for \(w\), and has time complexity \(\widetilde{\mathcal{O}}\left(n^{1.5}\frac{\kappa\zeta}{\xi}\log(1/\epsilon)\right)\), where \(\kappa\) and \(\zeta\) are the maximum condition number and Frobenius-to-spectral-norm ratio, respectively, for the matrices that must be inverted over the course of the algorithm, and \(\xi\) is the precision to which tomography must be performed. Note that in principle \(\xi\) can stay constant even as the overall precision \(\epsilon \rightarrow 0\) [10].
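
Structurally, each QIPM iteration looks like a classical interior point iteration with the linear solve swapped out; the skeleton below is a hedged schematic of that structure (our own sketch, not an implementation of the algorithms of [7, 8, 9, 10]):

```python
import numpy as np

def ipm_skeleton(make_newton_system, x0, num_iters):
    """Schematic interior point loop: each iteration forms a Newton linear
    system G dx = rhs and solves it.  In the QIPM, the classical solve below
    is replaced by a QLSS call followed by pure-state tomography, so dx is
    recovered only to precision xi, and G changes from iteration to iteration."""
    x = x0
    for t in range(num_iters):
        G, rhs = make_newton_system(x, t)
        dx = np.linalg.solve(G, rhs)  # <-- QLSS + tomography in the QIPM
        x = x + dx
    return x
```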

With the addition of discrete constraints, PO is instead formulated as a nonconvex MIP. MIPs are typically solved with a branch-and-bound approach (for a summary in a financial context, see, e.g., [11, Chapter 11]). Key to this approach is the ability to solve convex relaxations of the MIP, where the discrete constraints are dropped, in \(\mathrm{poly}(n)\) time (perhaps via classical or quantum interior point methods for SOCPs, as above). To impose the discrete constraints, a tree is constructed and explored, where generating the children of a given node in the tree requires solving one of these relaxations. Thus, the number of convex relaxations that must be solved is proportional to the tree size \(T\), which is generally exponentially large in \(n\). Reference [3] (extending prior work of [12]) showed that a quantum algorithm can produce the same output while exploring quadratically fewer nodes, solving roughly \(\widetilde{\mathcal{O}}\left(\sqrt{T}\right)\) convex relaxations (but doing so coherently, which could introduce overheads), for a total complexity of \(\widetilde{\mathcal{O}}\left(\sqrt{T}\right) \cdot \mathrm{poly}(n)\). The value of \(T\) is instance dependent and requires empirical estimation: a preliminary numerical analysis of the value of \(T\) for a certain ensemble of PO instances up to \(n=56\) found that \(T\sim 2^{0.14n}\) to \(2^{0.20n}\) [3].
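
For a rough sense of scale, the snippet below evaluates the empirical tree-size scaling reported in [3] and the corresponding square-root node count (illustrative arithmetic only; the exponents are fitted values for one particular instance ensemble):

```python
import math

# T ~ 2^(c*n) with fitted exponents c in [0.14, 0.20] from [3].
for n in (56, 100, 200):
    for c in (0.14, 0.20):
        T = 2 ** (c * n)
        print(f"n={n}, c={c}: T ~ {T:.2e}, sqrt(T) ~ {math.sqrt(T):.2e}")
```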

The assessment of the number of qubits used by these algorithms requires a nuanced discussion of data loading. A key feature of all of the approaches above is that they require (repeatedly) loading the classical data representing the historical stock information (i.e., the returns \(r\) and the covariance matrix \(\Sigma\)) into the quantum algorithm. The size of this data is typically \(\mathcal{O}\left( n^2 \right)\). Loading can be performed using block-encodings and QRAM, which achieves \(\mathcal{O}\left( \log(n) \right)\) depth (time), at the expense of \(\mathcal{O}\left( n^2 \right)\) space. Here, several caveats are inherited from the QRAM primitive. Moreover, for practical values of \(n\), this \(\mathcal{O}\left( n^2 \right)\) space cost could be prohibitively large, although it is possible this space cost could manifest as a dedicated QRAM hardware element of the device, rather than as part of the main processor. If log-depth QRAM of sufficient size is not desired or not available, the data could instead be loaded with only \(\mathcal{O}\left( \log(n) \right)\) space and in \(\mathcal{O}\left( n^2 \right)\) time, but this overhead in time would likely preclude the possibility of quantum speedup, at least in the first two cases above, where the formulation is convex and classical \(\mathrm{poly}(n)\)-time algorithms exist.

Existing error corrected resource estimates

A detailed, end-to-end resource analysis of the PO problem using QIPMs was performed in [10]. The authors followed the approach of [9] and performed a careful accounting of all quantum resources, including constant prefactors, finding that one needs \(800n^2\) logical qubits, a \(T\)-depth of

\[\begin{equation} (2\times 10^8)\kappa\zeta n^{1.5}\xi^{-2}\log_2(n)\log_2(\epsilon^{-1})\log_2(\kappa\zeta n^{14/27}\xi^{-1}), \end{equation}\]

and a \(T\)-count of

\[\begin{equation} (7\times 10^{11})\kappa\zeta n^{3.5}\xi^{-2}\log_2(n)\log_2(\epsilon^{-1})\log_2(\kappa\zeta \xi^{-1}), \end{equation}\]

where \(\kappa\) is the maximum condition number encountered in the algorithm, \(\zeta\) is the maximum Frobenius-to-spectral-norm ratio, and \(\xi\) is the minimum tomographic precision required. The \(\xi^{-2}\) dependence can asymptotically be improved to \(\xi^{-1}\) at the expense of a more sophisticated protocol for tomography [6]. Note also that this calculation incorporated optimized circuits for block-encoding with \(\mathcal{O}\left( \log(n) \right)\) \(T\)-depth but \(\mathcal{O}\left( n^2 \right)\) \(T\)-count [13], leading to the large discrepancy between those two quantities. The authors performed numerical simulations of portfolio optimization instances to determine the instance-specific quantities. Using numerically determined values for \(\kappa\zeta\) and \(\xi\), and using realistic values of \(\epsilon=10^{-7}\) and \(n=100\), these resource counts imply that one would need \(8\times 10^6\) logical qubits, \(2\times 10^{24}\) \(T\)-depth, and \(8\times 10^{29}\) \(T\)-count. These logical resource estimates (qubit counts and non-Clifford gate counts) could in principle be turned into estimates for the number of physical qubits and the runtime on actual hardware, using the methods discussed in the page on fault-tolerant quantum computation. However, the authors of [10] did not do so, in part because the logical costs were sufficiently high that the qualitative conclusion about the practicality of the algorithm was already clear.
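
For readers who want to plug in their own parameter values, the following snippet evaluates the quoted \(T\)-depth expression and checks the logical-qubit count (the instance-specific values of \(\kappa\zeta\) and \(\xi\) determined numerically in [10] are not reproduced here, so they are left as inputs):

```python
import math

def qipm_t_depth(n, eps, kappa_zeta, xi):
    """Leading-order T-depth expression quoted above (constants from [10]);
    kappa_zeta and xi are instance-specific inputs."""
    return (2e8 * kappa_zeta * n**1.5 / xi**2
            * math.log2(n) * math.log2(1 / eps)
            * math.log2(kappa_zeta * n**(14 / 27) / xi))

# The logical-qubit count is 800*n^2: for n = 100 this reproduces the
# 8e6 logical qubits quoted in the text.
print(800 * 100**2)  # -> 8000000
```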

Caveats

The quantum algorithms for PO discussed above inherit many of the caveats of their underlying primitives, namely QLSS, tomography, and classical data loading. One salient caveat is that the QLSS-based approaches depend on a number of instance-specific parameters \(\kappa, \zeta,\xi\), which are difficult to predict without running numerical simulations. The asymptotic speedup is subject to assumptions about the scaling of these parameters. Additionally, for a speedup to be possible, log-depth QRAM must be available on large datasets, which, while theoretically possible, presents practical challenges.

The branch-and-bound approach does not require log-depth QRAM to achieve its nearly quadratic speedup since the runtime will be dominated by the exponential tree-size factor (although it would help to have fast QRAM to reduce by \(\mathrm{poly}(n)\) factors the time needed to solve the convex relaxations at each step). However, a caveat to that approach is that to obtain the quadratic speedup, the convex relaxations of the MIP (which would be SOCPs), would need to be solved coherently. In principle, this is always possible, but it would likely require a substantial amount of coherent classical arithmetic and additional \(\mathrm{poly}(n)\) overheads in time and space.

Comparable classical complexity and challenging instance sizes

Convex formulations of the PO problem are typically solved classically via mapping to SOCP. Optimized software packages can solve these SOCPs efficiently, and many are based on interior point methods. These interior point methods have a theoretical runtime complexity of roughly \(\widetilde{\mathcal{O}}\left(n^{\omega+0.5}\log(1/\epsilon)\right)\), where \(\omega\approx 2.373\) is the matrix multiplication exponent, although for practical instance sizes, the effective value of \(\omega\) is typically closer to 3. Note that the example PO problem with 100 assets solved in [10] and described above can typically be solved within seconds on a laptop using traditional classical methods. Problem sizes found in the financial services industry can include as many as tens of thousands of assets.
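
As a point of reference for this classical baseline, a long-only instance of this size can be set up and solved in a few lines with an off-the-shelf convex-optimization package; the sketch below (synthetic data and parameter choices of our own) uses cvxpy, whose backend conic solvers include interior point methods:

```python
import cvxpy as cp
import numpy as np

# Long-only Markowitz instance with n = 100 assets and synthetic data.
n = 100
rng = np.random.default_rng(7)
A = rng.normal(size=(n, n))
Sigma = A @ A.T / n + 0.01 * np.eye(n)  # positive-definite covariance
r = rng.uniform(0.0, 0.1, size=n)
lam = 1.0                               # risk aversion parameter

w = cp.Variable(n)
objective = cp.Maximize(r @ w - lam * cp.quad_form(w, Sigma))
constraints = [cp.sum(w) == 1, w >= 0]  # budget and long-only constraints
prob = cp.Problem(objective, constraints)
prob.solve()                            # well under a second on a laptop
print(prob.value, np.count_nonzero(w.value > 1e-6))
```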

In the MIP formulation of PO, classical solutions will have complexity exponential in \(n\). As a point of reference, the numerical experiments reported in [3] classically solved hundreds of PO instances up to size \(n=56\) (and likely could have gone significantly higher).

Speedup

Recall that the QIPMs used to solve the SOCP for constrained PO are virtually identical to their classical counterparts; they differ by their use of a quantum subroutine to solve linear systems. Thus, any speedup obtained by the quantum approach to solving the SOCP will necessarily come from speedups from the QLSS plus tomography approach to solving a linear system. The approach for unconstrained PO was also based on the same primitives. The performance of the quantum method is often compared against classical Gaussian elimination. However, since the quantum approach necessarily produces only an approximate solution (due to the tomography step), another valid comparison to make is against approximate classical solvers, such as the randomized Kaczmarz method [14]. In this case, the classical complexity for solving an \(L\times L\) linear system to precision \(\xi\) scales as \(\mathcal{O}\left(L\kappa^2 \zeta^2\log(\xi^{-1})\right)\) (where \(\kappa\) is the condition number and \(\zeta\) the Frobenius-to-spectral-norm ratio), compared to \(\mathcal{O}\left(L^3\right)\) for Gaussian elimination (asymptotically \(\mathcal{O}\left(L^{\omega}\right)\)). Thus, the quantum method provides the greatest speedup when \(\kappa\zeta\propto L\) and \(\xi=\mathcal{O}\left( 1 \right)\), in which case the runtime of the QIPM for constrained PO scales as \(\widetilde{\mathcal{O}}\left( n^{2.5} \right)\), whereas the classical runtime scales as \(\widetilde{\mathcal{O}}\left( n^{3.5} \right)\), where \(n\) is the number of stocks in the portfolio (see [10, Table XI] for a more complete discussion). For unconstrained PO, which only requires solving one linear system, the comparison would be \(\widetilde{\mathcal{O}}\left( n^2 \right)\) vs. \(\widetilde{\mathcal{O}}\left( n^3 \right)\). In either case, the speedup is subquadratic. Moreover, the numerical simulations in [10] were not consistent with these optimistic assumptions on \(\kappa \zeta\) and \(\xi\), suggesting, rather, that the QIPM would have minimal if any speedup over classical IPMs, albeit based on small instance sizes up to \(n=120\).
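
For concreteness, a minimal sketch of the randomized Kaczmarz iteration [14] (our own implementation of the standard method): each step projects the iterate onto one randomly chosen row constraint, with rows sampled proportionally to their squared norms, and the number of iterations needed scales with \(\kappa^2\zeta^2 = \lVert A\rVert_F^2/\sigma_{\min}^2\).

```python
import numpy as np

def randomized_kaczmarz(A, b, num_iters, seed=0):
    """Randomized Kaczmarz [14]: at each step, pick row i with probability
    ||a_i||^2 / ||A||_F^2 and project x onto the hyperplane <a_i, x> = b_i."""
    rng = np.random.default_rng(seed)
    row_norms2 = (A * A).sum(axis=1)
    probs = row_norms2 / row_norms2.sum()
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        i = rng.choice(len(b), p=probs)
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x
```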

The speedup for the branch-and-bound approach to the MIP formulation of PO is quadratic (up to log factors), although, as mentioned, in contrast to the convex formulations, both the quantum and classical algorithms generally have runtime exponential in \(n\).

NISQ implementations

Several alternative approaches to portfolio optimization using quantum solutions have been proposed.

  • NISQ-HHL [15]. This work generalizes the algorithm of [5], described above, by employing mid-circuit measurements and conditional logic to obtain a NISQ version of the QLSS that readily solves the PO problem.
  • QAOA approaches: [16, 17, 18]. These approaches typically use the quadratic objective function from Eq. \(\eqref{eq:quadratic_objective}\), but instead consider \(w_i \in \{0,1\}\) as binary variables indicating whether or not an asset is part of the portfolio (a substantial deviation from the normal formulation). Constraints are dealt with by adding penalties to the objective function. Alternatively, constraints are enforced by choosing clever versions of the ansatz [19] or by making measurements to project into the feasible space [17].
  • Quantum annealing approaches: [20, 21, 22, 23, 4]. As in the previous case, these approaches require the problem to be formulated as a binary optimization problem. However, in this case, they typically take the MIP formulation and encode integers in binary through one of several possible encodings [20] (thus, the number of binary variables will be greater than \(n\)). Constraints in the PO problem can also be included in the objective function using a variety of tricks, resulting in the desired QUBO, which can then be solved using a quantum annealer.
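
To illustrate the binary encoding step in the annealing bullet above, the sketch below (our own helper; the encoding follows the standard construction discussed in [20]) expands integer holdings into binary variables, after which the quadratic objective pulls back to a QUBO over the expanded variables:

```python
import numpy as np

def binary_encoding_map(n_assets, m_bits):
    """Encode integer holdings h_j in {0, ..., 2^m - 1} as
    h_j = sum_k 2^k x_{j,k} with x_{j,k} in {0, 1}.  Returns the matrix E
    with h = E @ x for the flattened bit vector x (n*m binary variables)."""
    E = np.zeros((n_assets, n_assets * m_bits))
    for j in range(n_assets):
        E[j, j * m_bits:(j + 1) * m_bits] = 2.0 ** np.arange(m_bits)
    return E

# An integer quadratic form h^T S h - <c, h> pulls back to the QUBO
# x^T (E^T S E) x - <E^T c, x> over the binary variables.
E = binary_encoding_map(n_assets=3, m_bits=2)
print(E @ np.array([1, 0, 0, 1, 1, 1]))  # -> holdings [1., 2., 3.]
```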

Outlook

The QIPM approach (and QLSS-based techniques more generally) for continuous formulations of PO has the potential to offer polynomial (but subquadratic) speedups for the PO problem. However, these speedups are subject to conjectures about the scaling of certain instance-specific parameters, and preliminary empirical estimates are not suggestive of a maximal speedup. In any case, the resource estimates of [10] illustrate that the non-Clifford resources required to implement the QIPM for this use case are prohibitive, even at problem sizes that are trivial to solve with classical computers. An asymptotic quantum advantage for this problem could exist for sufficiently large sets of assets, but without drastic improvements to the quantum algorithm and the underlying primitives (e.g., QRAM, QLSS), it is unlikely this approach will be fruitful. Even if such improvements are made, the algorithm provides only a polynomial speedup that is subquadratic at best, greatly limiting the upside potential of this approach.

The branch-and-bound approach for discrete formulations offers the possibility of a larger, quadratic speedup, but, as has been observed (see, e.g., [24, 25]) in the context of Grover-like quadratic speedups in combinatorial optimization, it is unclear whether a quadratic speedup is sufficient to overcome the inherently slower quantum clock speeds and the overheads due to fault-tolerant quantum computation for practical instance sizes.

Bibliography

  1. Robert C Merton. An analytic derivation of the efficient portfolio frontier. Journal of Financial and Quantitative Analysis, 7(4):1851–1872, 1972. doi:10.2307/2329621.

  2. MOSEK ApS. Mosek portfolio optimization cookbook: release 1.3.0. 11 2023. https://docs.mosek.com/MOSEKPortfolioCookbook-a4paper.pdf, accessed: 2023-10-04.

  3. Shouvanik Chakrabarti, Pierre Minssen, Romina Yalovetzky, and Marco Pistoia. Universal quantum speedup for branch-and-bound, branch-and-cut, and tree-search algorithms. arXiv: https://arxiv.org/abs/2210.03210, 2022.

  4. Samuel Mugel, Carlos Kuchkovsky, Escolastico Sanchez, Samuel Fernandez-Lorenzo, Jorge Luis-Hita, Enrique Lizaso, and Roman Orus. Dynamic portfolio optimization with real datasets using quantum processors and quantum-inspired tensor networks. Physical Review Research, 4(1):013006, 2022. arXiv: https://arxiv.org/abs/2007.00017. doi:10.1103/PhysRevResearch.4.013006.

  5. Patrick Rebentrost and Seth Lloyd. Quantum computational finance: quantum algorithm for portfolio optimization. arXiv: https://arxiv.org/abs/1811.03975, 2018.

  6. Joran van Apeldoorn, Arjan Cornelissen, András Gilyén, and Giacomo Nannicini. Quantum tomography using state-preparation unitaries. In Proceedings of the 34th ACM-SIAM Symposium on Discrete Algorithms (SODA), 1265–1318. 2023. arXiv: https://arxiv.org/abs/2207.08800. doi:10.1137/1.9781611977554.ch47.

  7. Iordanis Kerenidis, Anupam Prakash, and Dániel Szilágyi. Quantum algorithms for second-order cone programming and support vector machines. Quantum, 5:427, 2021. arXiv: https://arxiv.org/abs/1908.06720. doi:10.22331/q-2021-04-08-427.

  8. Brandon Augustino, Tamás Terlaky, and Luis F Zuluaga. An inexact-feasible quantum interior point method for second-order cone optimization. Technical Report 21T-009, Department of Industrial and Systems Engineering, Lehigh University, 2022.

  9. Iordanis Kerenidis, Anupam Prakash, and Dániel Szilágyi. Quantum algorithms for portfolio optimization. In Proceedings of the 1st ACM Conference on Advances in Financial Technologies, AFT '19, 147–155. New York, NY, USA, 2019. Association for Computing Machinery. arXiv: https://arxiv.org/abs/1908.08040. doi:10.1145/3318041.3355465.

  10. Alexander M Dalzell, B David Clader, Grant Salton, Mario Berta, Cedric Yen-Yu Lin, David A Bader, Nikitas Stamatopoulos, Martin J A Schuetz, Fernando G S L Brandão, Helmut G Katzgraber, and others. End-to-end resource analysis for quantum interior point methods and portfolio optimization. PRX Quantum, pages to appear, 2023. arXiv: https://arxiv.org/abs/2211.12489.

  11. Gerard Cornuejols and Reha Tütüncü. Optimization methods in finance. Volume 5. Cambridge University Press, 2006.

  12. Ashley Montanaro. Quantum speedup of branch-and-bound algorithms. Physical Review Research, 2(1):013056, 2020. arXiv: https://arxiv.org/abs/1906.10375. doi:10.1103/PhysRevResearch.2.013056.

  13. B. David Clader, Alexander M. Dalzell, Nikitas Stamatopoulos, Grant Salton, Mario Berta, and William J. Zeng. Quantum resources required to block-encode a matrix of classical data. IEEE Transactions on Quantum Engineering, 3:1–23, 2022. arXiv: https://arxiv.org/abs/2206.03505. doi:10.1109/TQE.2022.3231194.

  14. Thomas Strohmer and Roman Vershynin. A randomized Kaczmarz algorithm with exponential convergence. Journal of Fourier Analysis and Applications, 15(2):262–278, 2009. arXiv: https://arxiv.org/abs/math/0702226. doi:10.1007/s00041-008-9030-4.

  15. Romina Yalovetzky, Pierre Minssen, Dylan Herman, and Marco Pistoia. NISQ-HHL: portfolio optimization for near-term quantum hardware. arXiv: https://arxiv.org/abs/2110.15958, 2021.

  16. Sebastian Brandhofer, Daniel Braun, Vanessa Dehn, Gerhard Hellstern, Matthias Hüls, Yanjun Ji, Ilia Polian, Amandeep Singh Bhatia, and Thomas Wellens. Benchmarking the performance of portfolio optimization with QAOA. Quantum Information Processing, 22(1):1–27, 2023. arXiv: https://arxiv.org/abs/2207.10555. doi:10.1007/s11128-022-03766-5.

  17. Dylan Herman, Ruslan Shaydulin, Yue Sun, Shouvanik Chakrabarti, Shaohan Hu, Pierre Minssen, Arthur Rattew, Romina Yalovetzky, and Marco Pistoia. Portfolio optimization via quantum zeno dynamics on a quantum processor. arXiv: https://arxiv.org/abs/2209.15024, 2022.

  18. Jack S Baker and Santosh Kumar Radha. Wasserstein solution quality and the quantum approximate optimization algorithm: a portfolio optimization case study. arXiv: https://arxiv.org/abs/2202.06782, 2022.

  19. Pradeep Niroula, Ruslan Shaydulin, Romina Yalovetzky, Pierre Minssen, Dylan Herman, Shaohan Hu, and Marco Pistoia. Constrained quantum optimization for extractive summarization on a trapped-ion quantum computer. Scientific Reports, 12(1):1–14, 2022. arXiv: https://arxiv.org/abs/2206.06290. doi:10.1038/s41598-022-20853-w.

  20. Gili Rosenberg, Poya Haghnegahdar, Phil Goddard, Peter Carr, Kesheng Wu, and Marcos López de Prado. Solving the optimal trading trajectory problem using a quantum annealer. IEEE Journal of Selected Topics in Signal Processing, 10(6):1053–1060, 2016. Earlier version in WHPCF'15, arXiv: https://arxiv.org/abs/1508.06182. doi:10.1109/JSTSP.2016.2574703.

  21. Samuel Palmer, Konstantinos Karagiannis, Adam Florence, Asier Rodriguez, Roman Orus, Harish Naik, and Samuel Mugel. Financial index tracking via quantum computing with cardinality constraints. arXiv: https://arxiv.org/abs/2208.11380, 2022.

  22. Samuel Palmer, Serkan Sahin, Rodrigo Hernandez, Samuel Mugel, and Roman Orus. Quantum portfolio optimization with investment bands and target volatility. arXiv: https://arxiv.org/abs/2106.06735, 2021.

  23. Erica Grant, Travis S Humble, and Benjamin Stump. Benchmarking quantum annealing controls with portfolio optimization. Physical Review Applied, 15(1):014012, 2021. arXiv: https://arxiv.org/abs/2007.03005. doi:10.1103/PhysRevApplied.15.014012.

  24. Earl Campbell, Ankur Khurana, and Ashley Montanaro. Applying quantum algorithms to constraint satisfaction problems. Quantum, 3:167, 2019. arXiv: https://arxiv.org/abs/1810.05582. doi:10.22331/q-2019-07-18-167.

  25. Ryan Babbush, Jarrod R. McClean, Michael Newman, Craig Gidney, Sergio Boixo, and Hartmut Neven. Focus beyond quadratic speedups for error-corrected quantum advantage. PRX Quantum, 2(1):010103, 3 2021. arXiv: https://arxiv.org/abs/2011.04149. doi:10.1103/PRXQuantum.2.010103.

  26. Emanuel Knill, Gerardo Ortiz, and Rolando D. Somma. Optimal quantum measurements of expectation values of observables. Physical Review A, 75:012328, 1 2007. arXiv: https://arxiv.org/abs/quant-ph/0607019. doi:10.1103/PhysRevA.75.012328.


  1. Reference [5] suggests several possible nonstandard problems that can be solved with \(\ket{w}\) without actually learning the entries of \(w\), such as sampling values of \(i\) with large \(|w_i|\), and estimating overlaps \(\langle \tilde{w}, w \rangle\) with hypothesized portfolios \(\tilde{w}\). In general, inner products \(\langle u, w \rangle\) of arbitrary normalized vectors \(u\) with \(w\) can be learned to precision \(\epsilon\) using overlap estimation [26] (an application of amplitude estimation), incurring multiplicative overhead of \(\mathcal{O}\left( 1/\epsilon \right)\), but no explicit linear-in-\(n\) dependence. However, the practical utility of such tasks within the existing workflows of financial institutions is unclear.