Linear Convergence in Games with Delayed Feedback via Extra Prediction
Abstract
Feedback delays are inevitable in real-world multi-agent learning. They are known to severely degrade performance, and the convergence rate under delayed feedback remains unclear, even for bilinear games. This paper derives linear convergence rates for Weighted Optimistic Gradient Descent-Ascent (WOGDA), which predicts future rewards with extra optimism, in unconstrained bilinear games. To analyze the algorithm, we interpret it as an approximation of the Extra Proximal Point (EPP) method, which updates using rewards farther in the future than the classical Proximal Point (PP) method. Our theorems show that standard optimism (predicting the next-step reward) achieves linear convergence to the equilibrium, with both the rate and the required number of iterations depending on the feedback delay. Moreover, employing extra optimism (predicting a farther-future reward) tolerates a larger step size and significantly accelerates the rate. Our experiments also show accelerated convergence driven by extra optimism and are qualitatively consistent with our theorems. In summary, this paper validates that extra optimism is a promising countermeasure against the performance degradation caused by feedback delays.
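As a point of reference for the "standard optimism" the abstract describes, the following is a minimal sketch of plain optimistic gradient descent-ascent (OGDA) on an unconstrained bilinear game. This is an illustrative assumption, not the paper's WOGDA: it omits the weighting, the feedback delay, and the extra (farther-future) prediction, and the matrix, step size, and initial points are chosen arbitrarily. Optimism here reuses the last gradient as a one-step prediction of the next one.

```python
import numpy as np

# Illustrative sketch (not the paper's WOGDA): plain OGDA on the
# unconstrained bilinear game  min_x max_y  x^T A y,
# whose unique equilibrium is (x, y) = (0, 0).
# One-step optimism:  x_{t+1} = x_t - eta * (2*g_x^t - g_x^{t-1}),
# and symmetrically (ascent) for y.

A = np.diag([1.0, 0.5, 2.0])       # example matrix with known singular values
x = np.array([1.0, 1.0, 1.0])      # arbitrary initial strategies
y = np.array([1.0, -1.0, 1.0])
eta = 0.1                          # step size small enough for convergence
init_dist = np.linalg.norm(x) + np.linalg.norm(y)

gx_prev, gy_prev = A @ y, A.T @ x  # bootstrap the gradient memory
for _ in range(2000):
    gx, gy = A @ y, A.T @ x                 # current gradients of x^T A y
    x = x - eta * (2 * gx - gx_prev)        # descent step with optimism
    y = y + eta * (2 * gy - gy_prev)        # ascent step with optimism
    gx_prev, gy_prev = gx, gy

final_dist = np.linalg.norm(x) + np.linalg.norm(y)
print(init_dist, final_dist)  # distance to the equilibrium shrinks
```

Note that plain gradient descent-ascent (without the `- gx_prev` correction) diverges on this game; the optimistic correction is what yields linear convergence. The extra optimism studied in the paper would replace this one-step gradient extrapolation with a prediction farther into the future.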
Source: arXiv.org - http://arxiv.org/abs/2602.17486v1 PDF: https://arxiv.org/pdf/2602.17486v1