
The matrix-vector complexity of $Ax=b$

Michał Dereziński

Abstract

Matrix-vector algorithms, particularly Krylov subspace methods, are widely viewed as the most effective algorithms for solving large systems of linear equations. This paper establishes lower bounds on the worst-case number of matrix-vector products needed by such an algorithm to approximately solve a general linear system. The first main result is that, for a matrix-vector algorithm which can perform products with both a matrix and its transpose, $\Omega(\kappa\log(1/\varepsilon))$ matrix-vector products are necessary to solve a linear system with condition number $\kappa$ to accuracy $\varepsilon$, matching an upper bound for conjugate gradient on the normal equations. The second main result is that one-sided algorithms, which lack access to the transpose, must use $n$ matrix-vector products to solve an $n \times n$ linear system, even when the problem is perfectly conditioned. Both main results include explicit constants that match known upper bounds up to a factor of four. These results rigorously demonstrate the limitations of matrix-vector algorithms and confirm the optimality of widely used Krylov subspace algorithms.
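
For intuition on the two-sided access model the abstract describes, the sketch below (not taken from the paper) runs conjugate gradient on the normal equations $A^T A x = A^T b$, touching $A$ only through products with $A$ and $A^T$ and counting those products. The function names, stopping rule, and the square-system assumption are illustrative choices, not details from the paper.

```python
import numpy as np

def cgnr(matvec, rmatvec, b, tol=1e-8, max_iters=None):
    """Solve Ax = b using only v -> A v (matvec) and v -> A^T v (rmatvec)."""
    n = b.shape[0]                 # square n x n system assumed, as in the abstract
    max_iters = max_iters or n
    x = np.zeros(n)
    r = b.copy()                   # residual b - A x
    s = rmatvec(r)                 # A^T r, residual of the normal equations
    p = s.copy()
    gamma = s @ s
    matvecs = 1                    # count every product with A or A^T
    for _ in range(max_iters):
        q = matvec(p); matvecs += 1
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = rmatvec(r); matvecs += 1
        gamma_new = s @ s
        if np.sqrt(gamma_new) <= tol:   # stop when ||A^T r|| is small (an assumed criterion)
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x, matvecs

# Example: a random, reasonably well-conditioned square system.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) + 5 * np.eye(50)
b = rng.standard_normal(50)
x, used = cgnr(lambda v: A @ v, lambda v: A.T @ v, b)
print(used, np.linalg.norm(A @ x - b))
```

The `matvecs` counter is the quantity the paper's lower bounds constrain: for this two-sided method the count scales like $\kappa\log(1/\varepsilon)$, which the first main result shows is unimprovable in the worst case.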


Source: arXiv:2602.04842v1 - http://arxiv.org/abs/2602.04842v1
PDF: https://arxiv.org/pdf/2602.04842v1

Submission: 2/4/2026
Subjects: Mathematics

