A practical randomized trust-region method to escape saddle points in high dimension
Abstract
Without randomization, escaping the saddle points of a smooth objective function requires an amount of information about the function (values, gradients, Hessian-vector products) that grows with the dimension. With randomization, this can be reduced to a polylogarithmic dependence in the dimension. The prototypical algorithm to that effect is perturbed gradient descent (PGD): through sustained jitter, it reliably escapes strict saddle points. However, it also never settles: there is no convergence. What is more, PGD requires precise tuning based on Lipschitz constants and a preset target accuracy. To improve on this, we modify the time-tested trust-region method with truncated conjugate gradients (TR-tCG). Specifically, we randomize the initialization of tCG (the subproblem solver), and we prove that tCG automatically amplifies the randomization near saddle points (to escape) and absorbs it near local minimizers (to converge). Saddle escape happens over several iterations; accordingly, our analysis is multi-step, with several novelties. The proposed algorithm is practical: it essentially tracks the good behavior of TR-tCG, with three minute modifications and a single new hyperparameter (the noise scale). We provide an implementation and numerical experiments.
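The abstract's central idea, randomizing the initialization of the truncated conjugate gradient subproblem solver, can be illustrated with a minimal sketch. The code below is a standard Steihaug-Toint truncated CG for the trust-region subproblem, modified only so that it starts from a small random point instead of zero; the parameter name `sigma` (the noise scale) and all implementation details are illustrative assumptions, not the paper's actual algorithm. Note that with a zero gradient at a saddle, classical tCG started at zero would terminate immediately, whereas the random start gives CG a residual that negative curvature can amplify.

```python
import numpy as np

def truncated_cg(H, g, delta, sigma=1e-3, max_iter=100, tol=1e-8, seed=0):
    """Steihaug-Toint truncated CG for min_s  g@s + 0.5*s@H@s  s.t. ||s|| <= delta.
    Illustrative sketch: starts from a small random point of scale `sigma`
    (hypothetical noise-scale hyperparameter) rather than from zero."""
    n = g.shape[0]
    # Randomized initialization: near a saddle with g ~ 0, this seeds a
    # nonzero residual that CG can amplify along negative-curvature directions.
    s = sigma * np.random.default_rng(seed).standard_normal(n)
    s *= min(1.0, delta / np.linalg.norm(s))  # stay inside the trust region
    r = g + H @ s      # gradient of the quadratic model at s
    p = -r             # initial search direction
    for _ in range(max_iter):
        Hp = H @ p
        pHp = p @ Hp
        if pHp <= 0:
            # Negative curvature detected: follow p to the boundary.
            return _to_boundary(s, p, delta)
        alpha = (r @ r) / pHp
        s_next = s + alpha * p
        if np.linalg.norm(s_next) >= delta:
            # Step leaves the trust region: truncate at the boundary.
            return _to_boundary(s, p, delta)
        r_next = r + alpha * Hp
        if np.linalg.norm(r_next) < tol:
            return s_next
        beta = (r_next @ r_next) / (r @ r)
        p = -r_next + beta * p
        s, r = s_next, r_next
    return s

def _to_boundary(s, p, delta):
    # Solve ||s + tau*p|| = delta for the nonnegative root tau.
    a, b, c = p @ p, 2 * (s @ p), s @ s - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return s + tau * p
```

For example, at an exact strict saddle (`g = 0`, `H` indefinite) this sketch returns a boundary step with negative model value, i.e., it escapes, while at a local minimizer (`H` positive definite) the small random start is quickly absorbed by CG's convergence.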
Source: arXiv:2603.15494v1 - http://arxiv.org/abs/2603.15494v1 (PDF: https://arxiv.org/pdf/2603.15494v1)