The matrix-vector complexity of $Ax=b$
Abstract
Matrix-vector algorithms, particularly Krylov subspace methods, are widely viewed as the most effective algorithms for solving large systems of linear equations. This paper establishes lower bounds on the worst-case number of matrix-vector products needed by such an algorithm to approximately solve a general linear system. The first main result is that, for a matrix-vector algorithm which can perform products with both a matrix and its transpose, $\Omega(\kappa \log(1/\epsilon))$ matrix-vector products are necessary to solve a linear system with condition number $\kappa$ to accuracy $\epsilon$, matching the upper bound achieved by conjugate gradient on the normal equations. The second main result is that one-sided algorithms, which lack access to the transpose, must use $\Omega(n)$ matrix-vector products to solve an $n \times n$ linear system, even when the problem is perfectly conditioned. Both main results include explicit constants that match the known upper bounds up to a factor of four. These results rigorously demonstrate the limitations of matrix-vector algorithms and confirm the optimality of widely used Krylov subspace algorithms.
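To make the matrix-vector query model concrete, the following is a minimal sketch (not taken from the paper) of conjugate gradient on the normal equations (CGNR), the two-sided algorithm whose upper bound the first lower bound matches. The solver touches $A$ only through black-box products with $A$ and $A^T$ and counts how many such products it uses; the helper name `cgnr` and the counting convention are illustrative assumptions.

```python
import numpy as np

def cgnr(matvec, rmatvec, b, tol=1e-10, maxiter=1000):
    """Conjugate gradient on the normal equations (CGNR): solves
    A^T A x = A^T b using only black-box products with A and A^T.
    Returns the approximate solution and the number of matrix-vector
    products consumed (the cost measure in the matvec query model)."""
    n = b.shape[0]
    x = np.zeros(n)
    r = b.copy()            # residual r = b - A x for the start x = 0
    z = rmatvec(r)          # z = A^T r, one product with A^T
    p = z.copy()
    nmv = 1                 # matvec counter
    for _ in range(maxiter):
        w = matvec(p)       # one product with A per iteration
        alpha = (z @ z) / (w @ w)
        x += alpha * p
        r -= alpha * w
        z_new = rmatvec(r)  # one product with A^T per iteration
        nmv += 2
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, nmv
        beta = (z_new @ z_new) / (z @ z)
        p = z_new + beta * p
        z = z_new
    return x, nmv
```

Each iteration costs exactly one product with $A$ and one with $A^T$, so the iteration count translates directly into the matrix-vector product count that the lower bounds constrain.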
Source: arXiv:2602.04842v1 (http://arxiv.org/abs/2602.04842v1), PDF: https://arxiv.org/pdf/2602.04842v1