
Complexity of Classical Acceleration for $\ell_1$-Regularized PageRank

Kimon Fountoulakis

Abstract

We study the degree-weighted work required to compute $\ell_1$-regularized PageRank using the standard one-gradient-per-iteration accelerated proximal-gradient method (FISTA). For non-accelerated local methods, the best known worst-case work scales as $\widetilde{O}((\alpha\rho)^{-1})$, where $\alpha$ is the teleportation parameter and $\rho$ is the $\ell_1$-regularization parameter. A natural question is whether FISTA can improve the dependence on $\alpha$ from $1/\alpha$ to $1/\sqrt{\alpha}$ while preserving the $1/\rho$ locality scaling. The challenge is that acceleration can break locality by transiently activating nodes that are zero at optimality, thereby increasing the cost of gradient evaluations. We analyze FISTA on a slightly over-regularized objective and show that, under a checkable confinement condition, all spurious activations remain inside a boundary set $\mathcal{B}$. This yields a bound consisting of an accelerated $(\rho\sqrt{\alpha})^{-1}\log(\alpha/\varepsilon)$ term plus a boundary overhead $\sqrt{\mathrm{vol}(\mathcal{B})}/(\rho\alpha^{3/2})$. We provide graph-structural conditions that imply such confinement. Experiments on synthetic and real graphs show the resulting speedup and slowdown regimes under the degree-weighted work model.
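The method the abstract refers to, FISTA with one gradient evaluation per iteration and an $\ell_1$ proximal (soft-thresholding) step, can be sketched as follows. This is a generic sketch for a composite objective $f(x) + \rho\|x\|_1$; the toy quadratic `Q`, seed vector `b`, and all constants below are illustrative assumptions, not the paper's degree-weighted PageRank objective or its locality analysis.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: elementwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(grad_f, L, rho, x0, iters=500):
    # One-gradient-per-iteration FISTA for min_x f(x) + rho * ||x||_1,
    # where grad_f is the gradient of the smooth part f and L is a
    # Lipschitz constant of grad_f (assumed known here).
    x = x0.copy()
    y = x0.copy()  # extrapolated point where the gradient is evaluated
    t = 1.0
    for _ in range(iters):
        x_new = soft_threshold(y - grad_f(y) / L, rho / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        # Momentum step: this extrapolation is what can transiently
        # activate coordinates that are zero at optimality.
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Toy demonstration with Q = I, so f(x) = (1/2)||x||^2 - b^T x is
# separable and the exact minimizer is soft_threshold(b, rho).
Q = np.eye(3)
b = np.array([1.0, 0.05, -0.3])
x = fista(lambda v: Q @ v - b, L=1.0, rho=0.1, x0=np.zeros(3), iters=50)
```

In a local PageRank solver the gradient would only be evaluated on the currently nonzero (plus neighboring) coordinates; it is exactly the extrapolation step above that the paper's confinement condition controls.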


Source: arXiv:2602.21138v1 - http://arxiv.org/abs/2602.21138v1
PDF: https://arxiv.org/pdf/2602.21138v1

Submission: 2/26/2026
Subjects: Machine Learning; Data Science
