Research Paper · Researchia: 202604.11063 · [Data Science > Machine Learning]

Provably Adaptive Linear Approximation for the Shapley Value and Beyond

Weida Li

Abstract

The Shapley value, and its broader family of semi-values, has received much attention in various attribution problems. A fundamental and long-standing challenge is their efficient approximation, since exact computation generally requires an exponential number of utility queries in the number of players $n$. To meet the challenges of large-scale applications, we explore the limits of efficiently approximating semi-values under a $\Theta(n)$ space constraint. Building upon a vector concentration inequality, we establish a theoretical framework that enables sharper query complexities for existing unbiased randomized algorithms. Within this framework, we systematically develop a linear-space algorithm that requires $O(\frac{n}{\varepsilon^{2}}\log\frac{1}{\delta})$ utility queries to ensure $P(\|\hat{\boldsymbol{\varphi}}-\boldsymbol{\varphi}\|_{2}\geq\varepsilon)\leq\delta$ for all commonly used semi-values. In particular, our framework naturally bridges OFA, unbiased kernelSHAP, SHAP-IQ, and the regression-adjusted approach, and definitively characterizes when paired sampling is beneficial. Moreover, our algorithm allows explicit minimization of the mean square error for each specific utility function. Accordingly, we introduce the first adaptive, linear-time, linear-space randomized algorithm, Adalina, that theoretically achieves improved mean square error. All of our theoretical findings are experimentally validated.
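For background on the approximation problem the abstract addresses: the paper's own algorithm (Adalina) is not specified here, but the classical unbiased baseline it improves upon is permutation-sampling Monte Carlo estimation of the Shapley value, which also runs in linear space. A minimal sketch of that baseline (not the paper's method; the utility function and player count below are illustrative):

```python
import random

def shapley_monte_carlo(n, utility, num_samples, seed=0):
    """Unbiased permutation-sampling estimate of the Shapley value.

    n           -- number of players
    utility     -- maps a frozenset of players (subset of range(n)) to a float
    num_samples -- number of random permutations; each costs n utility queries
    """
    rng = random.Random(seed)
    phi = [0.0] * n
    players = list(range(n))
    for _ in range(num_samples):
        rng.shuffle(players)
        coalition = set()
        prev = utility(frozenset(coalition))
        # Accumulate each player's marginal contribution along the permutation.
        for p in players:
            coalition.add(p)
            cur = utility(frozenset(coalition))
            phi[p] += cur - prev
            prev = cur
    return [v / num_samples for v in phi]
```

For an additive utility (each player contributes a fixed weight), every marginal contribution equals that weight, so the estimate is exact after any number of samples; for general utilities the estimate is unbiased and its error shrinks with the sample count, which is the query-complexity regime the abstract's $O(\frac{n}{\varepsilon^{2}}\log\frac{1}{\delta})$ bound sharpens.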


Source: arXiv:2604.08438v1 - http://arxiv.org/abs/2604.08438v1
PDF: https://arxiv.org/pdf/2604.08438v1

Submission: 4/11/2026
Subjects: Machine Learning; Data Science
