Decentralized Learning with Dynamically Refined Edge Weights: A Data-Dependent Framework
Abstract
This paper aims to accelerate decentralized optimization by strategically designing the edge weights used in agent-to-agent message exchanges. We propose a Dynamic Directed Decentralized Gradient (D3GD) framework and show that this data-dependent framework is a practical alternative to the classical directed DGD (Di-DGD) algorithm for learning on directed graphs. To obtain a strategy for edge-weight refinement, we derive a design function inspired by the cost-to-go function in a new convergence analysis of Di-DGD. This yields a data-dependent, dynamical design for the edge weights. A fully decentralized version of D3GD is developed in which each agent refines its communication strategy using only its neighbors' information. Numerical experiments show that D3GD accelerates convergence to a stationary solution by 30-40% over Di-DGD and learns edge weights that adapt to data similarity.
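To make the setting concrete, the following is a minimal sketch of the baseline that D3GD refines: decentralized gradient descent on a directed graph, where each agent mixes neighbors' iterates with edge weights and then takes a local gradient step. The scalar quadratic losses, the directed-ring topology, the weight matrix, and the step size below are illustrative assumptions, not the paper's algorithm or experimental setup.

```python
import numpy as np

def directed_dgd(a, W, alpha=0.05, iters=500):
    """Baseline directed DGD sketch: x_{k+1} = W x_k - alpha * grad f(x_k).

    Agent i holds the local loss f_i(x) = 0.5 * (x - a[i])**2 (an
    illustrative assumption), so grad f_i(x_i) = x_i - a[i].
    W[i, j] is the edge weight agent i applies to the message from agent j.
    """
    x = np.zeros(len(a))          # each agent's scalar estimate
    for _ in range(iters):
        mixed = W @ x             # weighted message exchange along directed edges
        grad = x - a              # local gradients at the current iterates
        x = mixed - alpha * grad  # combine-then-descend update
    return x

# Directed ring: agent i listens to itself and to agent (i+1) mod n,
# with uniform (non-refined) edge weights of 0.5 each.
n = 5
W = 0.5 * (np.eye(n) + np.roll(np.eye(n), 1, axis=1))
a = np.arange(n, dtype=float)     # local minimizers 0..4; their mean is the global one
x = directed_dgd(a, W)
```

With a constant step size, the agents converge to a neighborhood of consensus around the minimizer of the average loss; the edge weights here stay fixed, whereas D3GD would refine them over time from neighbors' information.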
Source: arXiv:2601.21355v1 - http://arxiv.org/abs/2601.21355v1 (PDF: https://arxiv.org/pdf/2601.21355v1)