SA-PEF: Step-Ahead Partial Error Feedback for Efficient Federated Learning
Abstract
Biased gradient compression with error feedback (EF) reduces communication in federated learning (FL), but under non-IID data the residual error can decay slowly, causing gradient mismatch and stalled progress in the early rounds. We propose step-ahead partial error feedback (SA-PEF), which integrates a step-ahead (SA) correction with partial error feedback (PEF); SA-PEF recovers plain EF at one extreme of the step-ahead coefficient and step-ahead EF (SAEF) at the other. For non-convex objectives and δ-contractive compressors, we establish a second-moment bound and a residual recursion that guarantee convergence to stationarity under heterogeneous data and partial client participation. The resulting rates match standard non-convex Fed-SGD guarantees up to constant factors, converging to a variance/heterogeneity floor with a fixed inner step size. Our analysis reveals a residual contraction controlled by the step-ahead coefficient, which explains the observed acceleration in the early training phase. To balance SAEF's rapid warm-up with EF's long-term stability, we select the step-ahead coefficient near its theory-predicted optimum. Experiments across diverse architectures and datasets show that SA-PEF consistently reaches target accuracy faster than EF.
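To make the error-feedback mechanism concrete, the following is a minimal single-worker sketch of EF with a top-k contractive compressor, plus a hypothetical step-ahead term controlled by a coefficient `gamma`. The abstract does not give SA-PEF's exact update equations, so the step-ahead form here (evaluating the gradient at a residual-shifted point, with `gamma=0` reducing to plain EF) is an illustrative assumption, not the paper's algorithm; the names `sa_pef_step`, `gamma`, and `topk` are ours.

```python
import numpy as np

def topk(v, k):
    # Contractive compressor: keep the k largest-magnitude entries,
    # zero out the rest (a standard δ-contractive example).
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def sa_pef_step(x, grad_fn, e, lr=0.1, gamma=0.5, k=2):
    # Hypothetical step-ahead correction: evaluate the gradient at a
    # point shifted by the accumulated residual e (gamma=0 -> plain EF).
    g = grad_fn(x - lr * gamma * e)
    p = g + e          # add the carried-over compression residual
    c = topk(p, k)     # transmit only the compressed vector c
    e_new = p - c      # store the untransmitted part for next round
    x_new = x - lr * c
    return x_new, e_new

# Usage: minimize f(x) = 0.5 * ||x||^2, whose gradient is x.
grad_fn = lambda x: x
x = np.array([3.0, -2.0, 1.0, 0.5])
e = np.zeros_like(x)
for _ in range(100):
    x, e = sa_pef_step(x, grad_fn, e)
print(np.linalg.norm(x))  # norm shrinks well below its initial value
```

Even though only 2 of 4 coordinates are transmitted each step, the residual `e` eventually feeds every coordinate's gradient mass back into the update, which is the core reason EF-style methods tolerate biased compression.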
Source: arXiv:2601.20738v1 (https://arxiv.org/abs/2601.20738v1); PDF: https://arxiv.org/pdf/2601.20738v1