
Latent Phase-Shift Rollback: Inference-Time Error Correction via Residual Stream Monitoring and KV-Cache Steering

Manan Gupta

Abstract

Large language models frequently commit unrecoverable reasoning errors mid-generation: once a wrong step is taken, subsequent tokens compound the mistake rather than correct it. We introduce Latent Phase-Shift Rollback (LPSR): at each generation step, we monitor the residual stream at a critical layer l_crit, detect abrupt directional reversals (phase shifts) via a cosine-similarity + entropy dual gate, and respond by rolling back the KV-cache and injecting a pre-computed steering vector. No fine-tuning, gradient computation, or additional forward passes are required. LPSR achieves 44.0% on MATH-500 with an 8B model versus 28.8% for standard autoregressive (AR) decoding (+15.2 pp; McNemar χ² = 66.96, p < 10⁻¹⁵). Critically, prompted self-correction, the most natural inference-time baseline, scores only 19.8%, below standard AR; LPSR exceeds it by +24.2 pp (χ² = 89.4, p ≈ 0). LPSR also outperforms Best-of-16 (+7.8 pp) at 5.4× lower token cost, and surpasses a standard 70B model (35.2%) with 8.75× fewer parameters at ~3× the token budget. A 32-layer sweep reveals a novel detection-correction dissociation: error-detection AUC peaks at layer 14 (0.718) but task accuracy peaks at layer 16 (44.0% vs. 29.2%), demonstrating that optimal monitoring depth differs for detection and correction.

Submitted: April 21, 2026
Subjects: NLP; Computational Linguistics
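To make the mechanism concrete, here is a minimal sketch of the two operations the abstract describes: a dual gate that fires when the residual-stream direction reverses sharply while next-token entropy is high, and a rollback step that truncates the KV-cache and adds a steering vector. All function names, thresholds, and tensor layouts below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def phase_shift_gate(h_prev, h_curr, logits, cos_thresh=0.0, ent_thresh=3.0):
    """Dual gate (sketch): flag a generation step as a phase shift when
    (a) the residual-stream vector at the monitored layer reverses
    direction relative to the previous step (low cosine similarity), AND
    (b) the next-token distribution is high-entropy (model is uncertain).
    Thresholds here are hypothetical placeholders."""
    cos = h_prev @ h_curr / (np.linalg.norm(h_prev) * np.linalg.norm(h_curr) + 1e-12)
    z = logits - logits.max()                 # stable softmax
    probs = np.exp(z) / np.exp(z).sum()
    entropy = -(probs * np.log(probs + 1e-12)).sum()
    return bool(cos < cos_thresh and entropy > ent_thresh)

def rollback_and_steer(kv_cache, hidden, step, k, steer_vec, alpha=1.0):
    """Rollback (sketch): truncate every layer's cached keys/values back
    k steps and nudge the hidden state with a pre-computed steering
    vector. Assumes K, V arrays of shape (heads, steps, head_dim)."""
    truncated = [(K[:, : step - k, :], V[:, : step - k, :]) for (K, V) in kv_cache]
    steered = hidden + alpha * steer_vec      # no gradients, no extra forward pass
    return truncated, steered
```

In an actual decoding loop the gate would run once per generated token against the stored previous-step activation; because it only reads activations that the forward pass already computes, it adds no extra forward passes, matching the abstract's cost claim.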


Source: arXiv:2604.18567v1 (http://arxiv.org/abs/2604.18567v1)
PDF: https://arxiv.org/pdf/2604.18567v1

