Research Paper · Researchia:202603.17030 · Mathematics

Saddle Point Evasion via Curvature-Regularized Gradient Dynamics

Liraz Mudrik

Abstract

Nonconvex optimization underlies many modern machine learning and control tasks, where saddle points pose the dominant obstacle to reliable convergence in high-dimensional settings. Escaping these saddle points deterministically and at a controllable rate remains an open challenge: gradient descent is blind to curvature, stochastic perturbation methods lack deterministic guarantees, and Newton-type approaches suffer from Hessian singularity. We present Curvature-Regularized Gradient Dynamics (CRGD), which augments the objective with a smooth penalty on the most negative Hessian eigenvalue, yielding an augmented cost that serves as an optimization Lyapunov function with user-selectable convergence rates to second-order stationary points. Numerical experiments on a nonconvex matrix factorization example confirm that CRGD escapes saddle points across all tested configurations, with escape time that decreases as the eigenvalue gap widens, in contrast to gradient descent, whose escape time grows as the gap shrinks.
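The paper itself is not reproduced on this page, but the core idea in the abstract can be sketched numerically. The toy example below (not the paper's matrix factorization experiment) assumes a softplus smoothing of the negative-eigenvalue penalty and a fixed penalty weight mu; the paper's exact penalty, dynamics, and rate-selection mechanism may differ. A 2D objective with a saddle at the origin and minima at (0, ±√2) is used so the augmented cost and its effect are easy to verify:

```python
import numpy as np

def f(p):
    """Toy nonconvex objective: saddle at (0, 0), minima at (0, +/-sqrt(2))."""
    x, y = p
    return x**2 - y**2 + 0.25 * y**4

def hessian(p):
    """Analytic Hessian of f (diagonal for this separable toy problem)."""
    x, y = p
    return np.array([[2.0, 0.0],
                     [0.0, -2.0 + 3.0 * y**2]])

def softplus(t):
    """Numerically stable softplus, used here as an assumed smooth penalty."""
    return np.log1p(np.exp(-abs(t))) + max(t, 0.0)

def augmented_cost(p, mu=0.5):
    """CRGD-style augmented cost: f plus a smooth penalty on the most
    negative Hessian eigenvalue (penalty is active only where lambda_min < 0
    up to the softplus smoothing)."""
    lam_min = np.linalg.eigvalsh(hessian(p))[0]  # eigvalsh returns ascending order
    return f(p) + mu * softplus(-lam_min)

def num_grad(F, p, h=1e-6):
    """Central finite-difference gradient, avoiding the eigenvector
    derivative needed for an analytic gradient of lambda_min."""
    g = np.zeros_like(p)
    for i in range(len(p)):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (F(p + e) - F(p - e)) / (2.0 * h)
    return g

def crgd(p0, eta=0.05, iters=500):
    """Plain gradient descent on the augmented cost."""
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        p -= eta * num_grad(augmented_cost, p)
    return p

# Near the saddle, the penalty term adds an extra repulsive force along the
# negative-curvature direction, so the iterate escapes and settles at a
# second-order stationary point where lambda_min(Hessian) > 0.
p_final = crgd([0.1, 0.01])
```

In this sketch the penalty gradient vanishes once the Hessian is positive definite (lambda_min becomes the constant 2 for |y| > sqrt(4/3)), so the minimizers of the augmented cost coincide with those of f; the penalty only reshapes the landscape around the saddle.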


Source: arXiv:2603.15606v1 (http://arxiv.org/abs/2603.15606v1)
PDF: https://arxiv.org/pdf/2603.15606v1

Submission: 3/17/2026
Comments: 0
Subjects: Mathematics

