Research Paper · Researchia:202601.28027 [Signal Processing > Engineering]

SA-PEF: Step-Ahead Partial Error Feedback for Efficient Federated Learning

Dawit Kiros Redie

Abstract

Biased gradient compression with error feedback (EF) reduces communication in federated learning (FL), but under non-IID data the residual error can decay slowly, causing gradient mismatch and stalled progress in the early rounds. We propose step-ahead partial error feedback (SA-PEF), which integrates step-ahead (SA) correction with partial error feedback (PEF). SA-PEF recovers EF when the step-ahead coefficient α = 0 and step-ahead EF (SAEF) when α = 1. For non-convex objectives and δ-contractive compressors, we establish a second-moment bound and a residual recursion that guarantee convergence to stationarity under heterogeneous data and partial client participation. The resulting rates match standard non-convex Fed-SGD guarantees up to constant factors, achieving O((ηη₀TR)⁻¹) convergence to a variance/heterogeneity floor with a fixed inner step size. Our analysis reveals a step-ahead-controlled residual contraction ρ_r that explains the observed acceleration in the early training phase. To balance SAEF's rapid warm-up with EF's long-term stability, we select α near its theory-predicted optimum. Experiments across diverse architectures and datasets show that SA-PEF consistently reaches target accuracy faster than EF.
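The abstract only sketches the method, so the following is a minimal, hypothetical illustration of the ingredients it names: error feedback with a δ-contractive top-k compressor, plus an α-weighted step-ahead term applied to the local iterate before the gradient is computed. The function name `sa_pef_round`, the exact placement of α, and the single-client view are assumptions for illustration, not the paper's actual update rule; setting α = 0 below recovers classical EF, matching the abstract's limiting case.

```python
import numpy as np

def topk(v, k):
    """δ-contractive compressor: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def sa_pef_round(x, grad_fn, e, lr, k, alpha):
    """One communication round for a single client (hypothetical sketch).

    alpha = 0 reduces to classical error feedback; alpha = 1 pre-applies the
    full residual to the iterate before the gradient step (one plausible
    reading of "step-ahead"); intermediate alpha interpolates between them.
    """
    x_sa = x - lr * alpha * e        # step-ahead peek using the residual
    g = grad_fn(x_sa)                # gradient at the stepped-ahead point
    msg = topk(g + e, k)             # biased compressed message sent to server
    e_new = (g + e) - msg            # unsent mass carried to the next round
    x_new = x - lr * msg             # apply only the transmitted update
    return x_new, e_new
```

On a simple quadratic (gradient = identity), iterating this round drives the iterate toward zero while the residual buffer recycles the coordinates dropped by the compressor.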


Source: arXiv:2601.20738v1 (http://arxiv.org/abs/2601.20738v1)
PDF: https://arxiv.org/pdf/2601.20738v1

Submission: 1/28/2026
Subjects: Engineering; Signal Processing
