Research Paper · Researchia:202602.11046 · Data Science > Machine Learning

Step-resolved data attribution for looped transformers

Georgios Kaissis

Abstract

We study how individual training examples shape the internal computation of looped transformers, where a shared block is applied for τ recurrent iterations to enable latent reasoning. Existing training-data influence estimators such as TracIn yield a single scalar score that aggregates over all loop iterations, obscuring when during the recurrent computation a training example matters. We introduce Step-Decomposed Influence (SDI), which decomposes TracIn into a length-τ influence trajectory by unrolling the recurrent computation graph and attributing influence to specific loop iterations. To make SDI practical at transformer scale, we propose a TensorSketch implementation that never materialises per-example gradients. Experiments on looped GPT-style models and algorithmic reasoning tasks show that SDI scales well, matches full-gradient baselines with low error, and supports a broad range of data attribution and interpretability tasks with per-step insights into the latent reasoning process.
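The two ideas in the abstract can be illustrated on a toy model. Below is a minimal, hypothetical sketch (not the paper's code): a looped scalar model h_{t+1} = tanh(w * h_t) with one shared weight, whose gradient naturally splits into per-iteration pieces, giving a length-τ influence trajectory whose entries sum to the usual one-checkpoint TracIn score; and a CountSketch projection as a simplified stand-in for the paper's TensorSketch trick for estimating gradient dot products without storing full per-example gradients. All function names and parameters here are illustrative assumptions.

```python
import math
import random

def per_step_grads(w, x, y, tau):
    """Toy looped model h_{t+1} = tanh(w * h_t) with a shared scalar
    weight w. Backprop the squared loss 0.5*(h_tau - y)^2 to each
    unrolled copy of w; these per-copy partials are the per-iteration
    gradient pieces a step-decomposed influence score attributes to.
    Illustrative stand-in, not the paper's API."""
    hs = [x]
    for _ in range(tau):
        hs.append(math.tanh(w * hs[-1]))
    g = hs[-1] - y                        # d loss / d h_tau
    steps = [0.0] * tau
    for t in reversed(range(tau)):
        dact = 1.0 - hs[t + 1] ** 2       # tanh'(w * h_t)
        steps[t] = g * dact * hs[t]       # partial w.r.t. weight copy t
        g = g * dact * w                  # propagate to h_t
    return steps

def sdi_trajectory(w, train_ex, test_ex, tau, lr=0.1):
    """One-checkpoint TracIn term, decomposed over loop iterations:
    entry t is lr * (train gradient piece at step t) * (total test
    gradient). The entries sum to the ordinary scalar TracIn score."""
    g_train = per_step_grads(w, *train_ex, tau)
    g_test = sum(per_step_grads(w, *test_ex, tau))
    return [lr * gt * g_test for gt in g_train]

def count_sketch(vec, d, seed=0):
    """CountSketch projection: random buckets with random signs preserve
    inner products in expectation, so influence dot products can be
    estimated from compact sketches instead of full per-example
    gradients (a simplified stand-in for TensorSketch)."""
    rng = random.Random(seed)
    bucket = [rng.randrange(d) for _ in range(len(vec))]
    sign = [rng.choice((-1.0, 1.0)) for _ in range(len(vec))]
    out = [0.0] * d
    for i, v in enumerate(vec):
        out[bucket[i]] += sign[i] * v
    return out

def sketched_dot(a, b, d=4096, seed=0):
    """Approximate <a, b> from two CountSketches built with shared
    hashes (same seed), never materialising a and b side by side."""
    sa, sb = count_sketch(a, d, seed), count_sketch(b, d, seed)
    return sum(p * q for p, q in zip(sa, sb))

# Length-tau influence trajectory of one train example on one test example.
traj = sdi_trajectory(0.8, (1.0, 0.5), (0.9, 0.4), tau=4)
```

Summing `traj` recovers the standard scalar TracIn score at this checkpoint, so the per-step entries refine the aggregate rather than replace it; the sketch-based dot product trades a small approximation error for never holding full per-example gradients in memory.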


Source: arXiv:2602.10097v1 (http://arxiv.org/abs/2602.10097v1)
PDF: https://arxiv.org/pdf/2602.10097v1

Submission: 2/11/2026
Subjects: Machine Learning; Data Science

