Research Paper | Researchia: 202601.29070 | Computational Linguistics > NLP

Mechanistic Data Attribution: Tracing the Training Origins of Interpretable LLM Units

Jianhui Chen

Abstract

While Mechanistic Interpretability has identified interpretable circuits in LLMs, their causal origins in training data remain elusive. We introduce Mechanistic Data Attribution (MDA), a scalable framework that employs Influence Functions to trace interpretable units back to specific training samples. Through extensive experiments on the Pythia family, we causally validate that targeted intervention (removing or augmenting a small fraction of high-influence samples) significantly modulates the emergence of interpretable heads, whereas random interventions show no effect. Our analysis reveals that repetitive structural data (e.g., LaTeX, XML) acts as a mechanistic catalyst. Furthermore, we observe that interventions targeting induction head formation induce a concurrent change in the model's in-context learning (ICL) capability. This provides direct causal evidence for the long-standing hypothesis regarding the functional link between induction heads and ICL. Finally, we propose a mechanistic data augmentation pipeline that consistently accelerates circuit convergence across model scales, providing a principled methodology for steering the developmental trajectories of LLMs.
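The abstract does not spell out how an influence score is computed, so the sketch below illustrates the general idea behind influence-function-style data attribution using a first-order (identity-Hessian) simplification, similar in spirit to TracIn, rather than the paper's actual MDA formulation. The toy model, the `unit_score` probe (a stand-in for a differentiable metric of an interpretable unit, e.g., an induction head's prefix-matching score), and all hyperparameters are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: score each training sample by the alignment of its loss
# gradient with the gradient of a differentiable "unit score" that probes an
# interpretable component. This is a first-order simplification of influence
# functions (the full method would use a Hessian-inverse-vector product).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(8, 1)  # toy stand-in for an LLM component, not Pythia
params = [p for p in model.parameters() if p.requires_grad]

def flat_grad(scalar):
    """Flatten d(scalar)/d(params) into a single 1-D vector."""
    grads = torch.autograd.grad(scalar, params)
    return torch.cat([g.reshape(-1) for g in grads])

def unit_score(model):
    """Hypothetical differentiable probe of an interpretable unit's strength."""
    x_probe = torch.ones(1, 8)
    return model(x_probe).sum()

# Gradient of the unit score: the target direction we attribute against.
g_unit = flat_grad(unit_score(model))

# Rank training samples by gradient alignment; high-scoring samples are the
# candidates that removal/augmentation interventions would act on.
data = [(torch.randn(1, 8), torch.randn(1, 1)) for _ in range(16)]
loss_fn = nn.MSELoss()
influences = []
for i, (x, y) in enumerate(data):
    g_train = flat_grad(loss_fn(model(x), y))
    influences.append((i, torch.dot(g_unit, g_train).item()))

influences.sort(key=lambda t: -t[1])
print("Top-influence training samples:", influences[:3])
```

In a full influence-function treatment, `g_unit` would be preconditioned by an approximate inverse Hessian (e.g., via LiSSA or EK-FAC) before the dot product; the identity-Hessian shortcut above only conveys the attribution logic, not the paper's claimed scalability machinery.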


Source: arXiv:2601.21996v1 (http://arxiv.org/abs/2601.21996v1)
PDF: https://arxiv.org/pdf/2601.21996v1

Submission: 1/29/2026
Subjects: NLP; Computational Linguistics

