Stochastic Trust-Region Methods for Over-parameterized Models
Abstract
Under interpolation-type assumptions such as the strong growth condition, stochastic optimization methods can attain convergence rates comparable to those of full-batch methods, but their performance, particularly for SGD, remains highly sensitive to step-size selection. To address this issue, we propose a unified stochastic trust-region framework that eliminates manual step-size tuning and extends naturally to equality-constrained problems. For unconstrained optimization, we develop a first-order stochastic trust-region algorithm and, under the strong growth condition, establish its iteration and stochastic first-order oracle complexity for finding an ε-stationary point. For equality-constrained problems, we introduce a quadratic-penalty-based stochastic trust-region method and establish its iteration and oracle complexity for reaching an ε-stationary point of the penalized problem, which corresponds to an approximate KKT point of the original constrained problem. Numerical experiments on deep neural network training and orthogonally constrained subspace fitting demonstrate that the proposed methods achieve performance comparable to well-tuned stochastic baselines, while exhibiting stable optimization behavior and effectively handling hard constraints without manual learning-rate scheduling.
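The paper's specific algorithms are not reproduced here, but the generic first-order trust-region mechanism such methods build on can be sketched as follows. The acceptance threshold, radius-update factors, and the over-parameterized least-squares demo problem are illustrative choices of ours, not taken from the paper; for the equality-constrained variant, the same step would be applied to a quadratic penalty of the form φ_ρ(x) = f(x) + (ρ/2)‖c(x)‖².

```python
import numpy as np

def stochastic_tr_step(x, grad_fn, loss_fn, delta,
                       eta=0.1, gamma_inc=2.0, gamma_dec=0.5):
    """One first-order (gradient-only) trust-region step.

    The linear model m(s) = f(x) + g.s is minimized over ||s|| <= delta
    at s = -delta * g / ||g||; the step is accepted and the radius grown
    when the actual-to-predicted reduction ratio is large enough,
    otherwise the radius shrinks. No learning rate is tuned by hand.
    """
    g = grad_fn(x)
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:                      # sampled gradient vanished: nothing to do
        return x, delta
    s = -delta * g / gnorm                # step to the trust-region boundary
    pred = delta * gnorm                  # predicted decrease: m(0) - m(s)
    rho = (loss_fn(x) - loss_fn(x + s)) / pred
    if rho >= eta:                        # successful step: accept and expand radius
        return x + s, gamma_inc * delta
    return x, gamma_dec * delta           # unsuccessful: reject and contract radius

# Demo on an over-parameterized (interpolating) least-squares problem:
# 40 equations in 80 unknowns, so a zero-loss solution exists.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 80))
b = A @ rng.normal(size=80)

def batch_loss(x, idx):
    r = A[idx] @ x - b[idx]
    return 0.5 * float(r @ r)

def batch_grad(x, idx):
    return A[idx].T @ (A[idx] @ x - b[idx])

x, delta = np.zeros(80), 1.0
init_loss = 0.5 * float(np.linalg.norm(A @ x - b) ** 2)
for _ in range(300):
    idx = rng.choice(40, size=10, replace=False)  # fresh minibatch each iteration
    x, delta = stochastic_tr_step(x,
                                  lambda z: batch_grad(z, idx),
                                  lambda z: batch_loss(z, idx),
                                  delta)
final_loss = 0.5 * float(np.linalg.norm(A @ x - b) ** 2)
```

The ratio test plays the role that a learning-rate schedule plays in SGD: the radius grows while sampled steps keep paying off and contracts otherwise, which is the behavior the abstract credits for removing manual step-size tuning.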
Source: arXiv:2604.14017v1 (http://arxiv.org/abs/2604.14017v1), PDF: https://arxiv.org/pdf/2604.14017v1
Apr 16, 2026
Mathematics