
Multi-Token Prediction via Self-Distillation

John Kirchenbauer

Abstract

Existing techniques for accelerating language model inference, such as speculative decoding, require training auxiliary speculator models and building and deploying complex inference pipelines. We consider a new approach for converting a pretrained autoregressive language model from a slow single next-token prediction model into a fast standalone multi-token prediction model using a simple online distillation objective. The final model retains the exact same implementation as the pretrained initial checkpoint and is deployable without the addition of any auxiliary verifier or other specialized inference code. On GSM8K, our method produces models that can decode more than 3× faster on average at a <5% drop in accuracy relative to single-token decoding performance.
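The abstract does not spell out the objective, but the idea of online distillation from a single-token teacher into a multi-token student can be sketched in a few lines. The following toy numpy example is an illustration only, not the paper's method: the "teacher" stands in for the frozen pretrained model queried autoregressively for each of the next K positions, the "student" stands in for the same network emitting all K distributions in one forward pass, and the loss is the average per-position KL divergence. All names, shapes, and the choice of KL direction here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, K = 8, 4  # toy vocabulary size and number of tokens predicted per step

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    # KL(p || q), summed over the vocabulary axis
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

# Teacher: frozen pretrained model, queried one token at a time so each of the
# K future positions gets a full next-token distribution (simulated here).
teacher_probs = softmax(rng.normal(size=(K, VOCAB)))

# Student: the same network being tuned to emit all K distributions in a
# single forward pass (simulated here with independent logits).
student_probs = softmax(rng.normal(size=(K, VOCAB)))

# Online distillation objective: mean KL between teacher and student, per
# predicted position. Gradient descent on this drives the student's parallel
# predictions toward the teacher's autoregressive ones.
loss = kl(teacher_probs, student_probs).mean()
print(f"distillation loss: {loss:.4f}")
```

A perfectly trained student would match the teacher at every position, driving this loss to zero, which is the sense in which the converted model can decode K tokens per forward pass with little accuracy loss.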

Submitted: February 5, 2026 · Subjects: NLP; Computational Linguistics



Source: arXiv:2602.06019v1 (http://arxiv.org/abs/2602.06019v1) · PDF: https://arxiv.org/pdf/2602.06019v1

