
PRISM: Distribution-free Adaptive Computation of Matrix Functions for Accelerating Neural Network Training

Shenghao Yang

Abstract

Matrix functions such as the square root, inverse roots, and orthogonalization play a central role in preconditioned gradient methods for neural network training. This has motivated the development of iterative algorithms that avoid explicit eigendecompositions and rely primarily on matrix multiplications, making them well suited to modern GPU accelerators. We present PRISM (Polynomial-fitting and Randomized Iterative Sketching for Matrix functions computation), a general framework for accelerating iterative algorithms that compute matrix functions. PRISM combines adaptive polynomial approximation with randomized sketching: at each iteration, it fits a polynomial surrogate to the current spectrum via a sketched least-squares problem, adapting to the instance at hand with minimal overhead. We apply PRISM to accelerate Newton-Schulz-like iterations for matrix square roots and orthogonalization, which are core primitives in machine learning. Unlike prior methods, PRISM requires no explicit spectral bounds or singular value estimates, and it adapts automatically to the evolving spectrum. Empirically, PRISM accelerates training when integrated into the Shampoo and Muon optimizers.
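The abstract mentions Newton-Schulz-like iterations as a core primitive. For context, the sketch below shows a standard cubic Newton-Schulz orthogonalization iteration in NumPy. This is the textbook baseline, not the PRISM method itself: PRISM would replace the fixed coefficients (1.5, -0.5) with polynomial coefficients fitted at each step via a sketched least-squares problem. The function name and step count are illustrative choices, not taken from the paper.

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=25):
    """Cubic Newton-Schulz iteration driving the singular values of G
    toward 1, approximating the orthogonal polar factor of G.

    Baseline sketch only; PRISM adapts the polynomial coefficients per
    iteration instead of using the fixed (1.5, -0.5) pair below.
    """
    # Scale by the Frobenius norm so the spectral norm is at most 1,
    # which places all singular values inside the basin of convergence.
    X = G / np.linalg.norm(G)
    for _ in range(steps):
        A = X @ X.T
        # Fixed-coefficient cubic update: s -> 1.5*s - 0.5*s^3 per singular value
        X = 1.5 * X - 0.5 * A @ X
    return X

rng = np.random.default_rng(0)
G = rng.standard_normal((8, 8))
X = newton_schulz_orthogonalize(G)
# X.T @ X should now be close to the identity
print(np.max(np.abs(X.T @ X - np.eye(8))))
```

Each iteration maps every singular value s to 1.5s - 0.5s^3, which has an attracting fixed point at 1, so repeated application pushes all singular values toward 1 while preserving the singular vectors.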


Source: arXiv:2601.22137v1 - http://arxiv.org/abs/2601.22137v1
PDF: https://arxiv.org/pdf/2601.22137v1

Submission: January 29, 2026
Subjects: Artificial Intelligence

