Descent-Guided Policy Gradient for Scalable Cooperative Multi-Agent Learning
Abstract
Scaling cooperative multi-agent reinforcement learning (MARL) is fundamentally limited by cross-agent noise: when agents share a common reward, the actions of all agents jointly determine each agent's learning signal, so cross-agent noise grows with the number of agents N. In the policy gradient setting, per-agent gradient estimate variance therefore scales with N, yielding a sample complexity that also grows with N. We observe that many domains -- cloud computing, transportation, power systems -- have differentiable analytical models that prescribe efficient system states. In this work, we propose Descent-Guided Policy Gradient (DG-PG), a framework that constructs noise-free per-agent guidance gradients from these analytical models, decoupling each agent's gradient from the actions of all others. We prove that DG-PG reduces gradient variance from a quantity that grows with N to one independent of N, preserves the equilibria of the cooperative game, and achieves sample complexity independent of the number of agents. On a heterogeneous cloud scheduling task with up to 200 agents, DG-PG converges within 10 episodes at every tested scale -- directly confirming the predicted scale-invariant complexity -- while MAPPO and IPPO fail to converge under identical architectures.
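To make the core idea concrete, here is a minimal, hypothetical sketch of the guidance-gradient mechanism the abstract describes: each agent's update direction is obtained by differentiating an analytical system model with respect to that agent's own action, with the other agents held at the model-prescribed efficient values, so the update carries no cross-agent sampling noise. All names (analytical_cost, prescribed, the toy policies) are illustrative assumptions, not the paper's actual API or model.

```python
# Hypothetical sketch of per-agent guidance gradients from a differentiable
# analytical model (assumed interface; not the paper's implementation).
import torch

N_AGENTS, OBS_DIM, ACT_DIM = 4, 3, 2

def analytical_cost(actions: torch.Tensor, prescribed: torch.Tensor) -> torch.Tensor:
    # Placeholder differentiable system model: penalize deviation of the joint
    # action from the system state the analytical model prescribes as efficient.
    return ((actions - prescribed) ** 2).sum()

policies = [torch.nn.Linear(OBS_DIM, ACT_DIM) for _ in range(N_AGENTS)]
optims = [torch.optim.SGD(p.parameters(), lr=1e-2) for p in policies]

obs = torch.randn(N_AGENTS, OBS_DIM)          # per-agent observations
prescribed = torch.zeros(N_AGENTS, ACT_DIM)   # model-prescribed efficient targets

for i, (pi, opt) in enumerate(zip(policies, optims)):
    action_i = pi(obs[i])                     # only agent i's own action is sampled
    # Guidance gradient: hold all other agents at their prescribed values, so
    # agent i's gradient is decoupled from the others' sampled actions.
    joint = prescribed.clone()
    joint[i] = action_i
    cost = analytical_cost(joint, prescribed)
    opt.zero_grad()
    cost.backward()                           # noise-free per-agent update direction
    opt.step()
```

Because the cost seen by agent i depends only on its own action and the fixed prescribed values, the variance of its update does not grow with the number of agents, which is the scaling behavior the abstract attributes to DG-PG.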
Source: arXiv:2602.20078v1 (http://arxiv.org/abs/2602.20078v1) - PDF: https://arxiv.org/pdf/2602.20078v1