How Fast Should a Model Commit to Supervision? Training Reasoning Models on the Tsallis Loss Continuum
Abstract
Adapting reasoning models to new tasks during post-training with only output-level supervision stalls under reinforcement learning from verifiable rewards (RLVR) when the initial success probability $p_0$ is small. Using the Tsallis $q$-logarithm, we define a loss family $J_q$ that interpolates between RLVR (at $q{=}0$, the exploitation pole) and the log-marginal-likelihood over latent trajectories (at $q{=}1$, the density-estimation pole). All members share the same per-example gradient direction, differing only by a scalar amplification that reweights each instance independently of the learning rate. This amplification is the mechanism that addresses cold-start stalling: under gradient flow, the exploitation pole is far slower to escape cold start than the density-estimation pole, and intermediate $q$ trades escape speed against noise memorization. Because $J_q$ is intractable, we derive two Monte Carlo estimators from the two factorizations of its gradient: Gradient-Amplified RL (GARL) samples from the prior and amplifies the RL gradient, and Posterior-Attenuated Fine-Tuning (PAFT) importance-resamples from the posterior and runs standard SFT. Both estimators are biased; GARL has lower variance, while PAFT has semantically coherent gradients. On FinQA, HotPotQA, and MuSiQue, GARL substantially mitigates cold-start stalling, escaping cold start where GRPO fails entirely. In warm start, GARL at low $q$ dominates on FinQA, where training is stable; on HotPotQA and MuSiQue, GARL destabilizes during training, and PAFT provides stable gradients (best overall on HotPotQA at 47.9 maj@16, improving over GRPO).
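To make the interpolation concrete, here is a minimal sketch in Python, assuming the standard Tsallis $q$-logarithm $\ln_q(x) = (x^{1-q}-1)/(1-q)$ with $\ln_1 = \ln$, and a per-example objective $J_q = \ln_q p_\theta(\text{correct})$ (our notation; the paper's exact definition may differ). Under these assumptions $\tfrac{d}{dp}\ln_q(p) = p^{-q}$, so every $q$ shares the gradient direction $\nabla p$ while the scalar $p^{-q}$ supplies the amplification the abstract describes: $1$ at the RLVR pole, $1/p$ at the log-likelihood pole.

```python
import numpy as np

def tsallis_log(x, q):
    """Tsallis q-logarithm: ln_q(x) = (x**(1-q) - 1) / (1 - q), with ln_1 = ln."""
    if np.isclose(q, 1.0):
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def amplification(p, q):
    """d ln_q(p) / dp = p**(-q): the scalar that reweights each example's gradient.
    q=0 gives 1 (plain RLVR gradient on the success probability);
    q=1 gives 1/p (log-likelihood), which is huge when p is small."""
    return p ** (-q)

p0 = 1e-3  # small initial success probability: the cold-start regime
for q in (0.0, 0.5, 1.0):
    print(f"q={q}: J_q = {tsallis_log(p0, q):+.3f}, amplification = {amplification(p0, q):.1f}")
```

At $p_0 = 10^{-3}$ the amplification spans three orders of magnitude across the continuum ($1$ at $q{=}0$, about $31.6$ at $q{=}0.5$, $1000$ at $q{=}1$), which is the lever that controls escape speed from cold start.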
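The two estimators can likewise be sketched from the two factorizations of the same gradient; this is our reading under the assumptions above, not the paper's implementation, and all function names and array stand-ins are hypothetical. Writing $\nabla J_q = p^{-q}\,\mathbb{E}_{z\sim\text{prior}}[r(z)\,\nabla\log p_\theta(z)]$ gives a prior-sampled RL gradient amplified by $p^{-q}$ (GARL-style), while $\nabla J_q = p^{1-q}\,\mathbb{E}_{z\sim\text{posterior}}[\nabla\log p_\theta(z)]$ gives an SFT gradient on verified traces attenuated by $p^{1-q}$ (PAFT-style).

```python
import numpy as np

def garl_gradient(grad_logp, rewards, q, eps=1e-8):
    """GARL-style: trajectories sampled from the prior (current policy); amplify
    the plain REINFORCE estimate of grad p by p_hat**(-q)."""
    p_hat = max(rewards.mean(), eps)
    rl_grad = (rewards[:, None] * grad_logp).mean(axis=0)  # estimates grad p
    return p_hat ** (-q) * rl_grad  # plugging p_hat into p**(-q) introduces bias

def paft_gradient(grad_logp, rewards, q, eps=1e-8):
    """PAFT-style: keep only verified-correct trajectories (posterior samples
    under a binary reward), run standard SFT on them, and attenuate the
    gradient by p_hat**(1-q); at q=1 this is plain SFT on verified traces."""
    p_hat = max(rewards.mean(), eps)
    winners = grad_logp[rewards > 0]
    if len(winners) == 0:
        return np.zeros(grad_logp.shape[1])  # no posterior samples this batch
    sft_grad = winners.mean(axis=0)  # grad of mean log-prob of correct traces
    return p_hat ** (1.0 - q) * sft_grad

# Toy usage: 16 sampled trajectories, 4-dim parameter space, ~10% verifier success.
rng = np.random.default_rng(0)
grad_logp = rng.normal(size=(16, 4))            # stand-in for per-trajectory grad log p
rewards = (rng.random(16) < 0.1).astype(float)  # binary verifiable reward
print(garl_gradient(grad_logp, rewards, q=0.5))
print(paft_gradient(grad_logp, rewards, q=0.5))
```

With prior samples and a binary verifier, self-normalized importance resampling over the posterior reduces to keeping the correct traces uniformly, and plugging the Monte Carlo estimate $\hat p$ into the nonlinear factors $\hat p^{-q}$ and $\hat p^{1-q}$ is one natural source of the bias the abstract attributes to both estimators; in expectation, both point along the same direction $p^{-q}\nabla p$.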
Source: arXiv:2604.25907v1 (http://arxiv.org/abs/2604.25907v1; PDF: https://arxiv.org/pdf/2604.25907v1)