Latent Phase-Shift Rollback: Inference-Time Error Correction via Residual Stream Monitoring and KV-Cache Steering
Abstract
Large language models frequently commit unrecoverable reasoning errors mid-generation: once a wrong step is taken, subsequent tokens compound the mistake rather than correct it. We introduce Latent Phase-Shift Rollback (LPSR): at each generation step, we monitor the residual stream at a critical layer l_crit, detect abrupt directional reversals (phase shifts) via a cosine-similarity + entropy dual gate, and respond by rolling back the KV-cache and injecting a pre-computed steering vector. No fine-tuning, gradient computation, or additional forward passes are required. LPSR outperforms standard autoregressive decoding on MATH-500 with an 8B model (significant under McNemar's test). Critically, prompted self-correction, the most natural inference-time baseline, scores below standard AR; LPSR exceeds it. LPSR also outperforms Best-of-16 at lower token cost, and surpasses a standard 70B model with fewer parameters at a comparable token budget. A 32-layer sweep reveals a novel detection-correction dissociation: error-detection AUC peaks at layer 14 but task accuracy peaks at layer 16, demonstrating that the optimal monitoring depth differs for detection and correction.
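The two mechanisms the abstract describes, a dual gate (directional reversal in the residual stream plus an entropy spike in the next-token distribution) and a rollback-with-steering response, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the threshold values, the rollback depth `k`, the steering scale `alpha`, and the `(key, value)` KV-cache layout are all assumptions for the sketch.

```python
import torch
import torch.nn.functional as F

def phase_shift_gate(h_prev, h_curr, logits, cos_thresh=0.0, ent_thresh=3.0):
    """Dual gate from the LPSR description: flag a phase shift when the
    residual-stream direction reverses relative to the previous step
    (cosine similarity below cos_thresh) AND next-token entropy spikes
    (above ent_thresh). Threshold values here are illustrative."""
    cos = F.cosine_similarity(h_prev, h_curr, dim=-1)
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1)
    return (cos < cos_thresh) & (entropy > ent_thresh)

def rollback_and_steer(past_kv, hidden, steer_vec, k=1, alpha=1.0):
    """On detection: drop the last k cached positions from each layer's
    (key, value) pair and nudge the hidden state along a pre-computed
    steering vector. Hypothetical signature; a real KV-cache layout
    depends on the model (here: [..., seq_len, head_dim])."""
    trimmed = [(key[..., :-k, :], val[..., :-k, :]) for key, val in past_kv]
    return trimmed, hidden + alpha * steer_vec
```

In a generation loop, `phase_shift_gate` would run once per decoded token on the layer-l_crit residual; only when it fires does `rollback_and_steer` touch the cache, so the monitor adds no extra forward passes, matching the abstract's cost claim.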
Source: arXiv:2604.18567v1 - http://arxiv.org/abs/2604.18567v1 | PDF: https://arxiv.org/pdf/2604.18567v1
Apr 21, 2026
Categories: Computational Linguistics, NLP