Research Paper · Researchia:202601.07526400 [Data Science > Data Science]

Learning Shrinks the Hard Tail: Training-Dependent Inference Scaling in a Solvable Linear Model

Noam Levi

Abstract

We analyze neural scaling laws in a solvable model of last-layer fine-tuning where targets have intrinsic, instance-heterogeneous difficulty. In our Latent Instance Difficulty (LID) model, each input's target variance is governed by a latent "precision" drawn from a heavy-tailed distribution. While the generalization loss recovers standard scaling laws, our main contribution connects this to inference. The pass@$k$ failure rate exhibits a power-law decay, $k^{-\beta_\text{eff}}$, but the observed exponent $\beta_\text{eff}$ is training-dependent: it grows with sample size $N$ before saturating at an intrinsic limit $\beta$ set by the tail of the difficulty distribution. This coupling reveals that learning "shrinks the hard tail" of the error distribution: reductions in the model's generalization error steepen the pass@$k$ curve until irreducible target variance dominates. The LID model yields testable, closed-form predictions for this behavior, including a compute-allocation rule that favors training before saturation and inference attempts after. We validate these predictions in simulations and in two real-data proxies: CIFAR-10H (human-label variance) and a math teacher-student distillation task.
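The tail-driven pass@$k$ decay described above can be illustrated with a minimal Monte Carlo sketch. This is an illustrative toy under an assumed tail shape, not the paper's exact LID construction: if each instance's per-attempt success probability $p$ has $P(p < x) \sim x^\beta$ near zero (a heavy tail of hard instances), the mean failure rate $\mathbb{E}[(1-p)^k]$ decays roughly as $k^{-\beta}$.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.5       # assumed tail exponent of the difficulty distribution
n = 200_000      # number of simulated instances

# Per-attempt success probability with P(p < x) = x**beta near 0,
# obtained by the inverse-CDF transform of a uniform draw.
p = rng.uniform(size=n) ** (1.0 / beta)

def failure_rate(k):
    """Pass@k failure rate: all k independent attempts fail."""
    return np.mean((1.0 - p) ** k)

ks = np.array([8, 16, 32, 64, 128])
f = np.array([failure_rate(k) for k in ks])

# Estimate the effective exponent from the log-log slope; at finite k
# it sits slightly below beta and approaches it as k grows.
beta_eff = -np.polyfit(np.log(ks), np.log(f), 1)[0]
print(f"beta_eff ≈ {beta_eff:.2f}")
```

Varying the model quality in this toy (e.g., shrinking the mass of hard instances) would steepen the fitted slope, mirroring the paper's claim that the observed exponent is training-dependent until the intrinsic limit $\beta$ is reached.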

Submission: 1/7/2026
Subjects: Data Science

