Learning Over-Relaxation Policies for ADMM with Convergence Guarantees
Abstract
The Alternating Direction Method of Multipliers (ADMM) is a widely used method for structured convex optimization, and its practical performance depends strongly on the choice of penalty and relaxation parameters. Motivated by settings such as Model Predictive Control (MPC), where one repeatedly solves related optimization problems with fixed structure and changing parameter values, we propose learning online updates of the relaxation parameter to improve performance on problem classes of interest. This choice is computationally attractive in OSQP-like architectures, since adapting the relaxation parameter does not trigger the matrix refactorizations associated with penalty updates. We establish convergence guarantees for ADMM with time-varying penalty and relaxation parameters under mild assumptions, and show on benchmark quadratic programs that the resulting learned policies reduce both iteration count and wall-clock time relative to baseline OSQP.
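For concreteness, here is a minimal NumPy/SciPy sketch of the standard OSQP-style ADMM iteration for a QP of the form minimize (1/2) x'Px + q'x subject to l <= Ax <= u, written so that the relaxation parameter alpha_k may change at every iteration. The alpha_policy callable below is a hypothetical stand-in for a learned policy, not the paper's implementation; what the sketch illustrates is the abstract's point that the factored KKT matrix (P + sigma*I + rho*A'A) depends on the penalty rho but not on alpha, so relaxation updates come at no refactorization cost.

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    def admm_qp(P, q, A, l, u, alpha_policy, rho=1.0, sigma=1e-6,
                max_iter=500, eps=1e-6):
        n, m = P.shape[0], A.shape[0]
        x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)
        # Factor the KKT matrix once: it depends on (sigma, rho) but not
        # on alpha, so time-varying relaxation never forces refactorization.
        K = cho_factor(P + sigma * np.eye(n) + rho * A.T @ A)
        for k in range(max_iter):
            # x-update: one solve against the prefactored system.
            x_t = cho_solve(K, sigma * x - q + A.T @ (rho * z - y))
            z_t = A @ x_t
            # Relaxation step with iteration-dependent alpha_k in (0, 2);
            # alpha_policy is a hypothetical stand-in for a learned policy.
            alpha = alpha_policy(k)
            x = alpha * x_t + (1.0 - alpha) * x
            z_rel = alpha * z_t + (1.0 - alpha) * z
            # z-update: Euclidean projection onto the box [l, u].
            z_next = np.clip(z_rel + y / rho, l, u)
            # Dual ascent step.
            y = y + rho * (z_rel - z_next)
            z = z_next
            # OSQP-style unscaled residuals for the stopping test.
            if (np.linalg.norm(A @ x - z, np.inf) < eps
                    and np.linalg.norm(P @ x + q + A.T @ y, np.inf) < eps):
                return x, k + 1
        return x, max_iter

    # Tiny box-constrained QP; a constant schedule stands in for the policy.
    P = np.array([[4.0, 1.0], [1.0, 2.0]])
    q = np.array([1.0, 1.0])
    A = np.eye(2)
    l, u = np.zeros(2), np.ones(2)
    x, iters = admm_qp(P, q, A, l, u, alpha_policy=lambda k: 1.6)
    print("x* =", x, "iterations:", iters)

Setting alpha_policy = lambda k: 1.0 recovers unrelaxed ADMM; constant values in (1, 2) give classical over-relaxation, and a learned policy would instead choose alpha_k online from iterate information.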
Source: arXiv:2604.26932v1 - http://arxiv.org/abs/2604.26932v1 PDF: https://arxiv.org/pdf/2604.26932v1
Apr 30, 2026
Data Science
Machine Learning