
Multi-Token Prediction via Self-Distillation

John Kirchenbauer

Abstract

Existing techniques for accelerating language model inference, such as speculative decoding, require training auxiliary speculator models and building and deploying complex inference pipelines. We consider a new approach for converting a pretrained autoregressive language model from a slow single next token prediction model into a fast standalone multi-token prediction model using a simple online distillation objective. The final model retains the exact same implementation as the pretrained initial checkpoint and is deployable without the addition of any auxiliary verifier or other specialized inference code. On GSM8K, our method produces models that can decode more than 3× faster on average at <5% drop in accuracy relative to single token decoding performance.
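The abstract does not spell out the distillation objective, but a common way to set up this kind of self-distillation is to treat a frozen copy of the pretrained model as the teacher, let the student propose k future tokens in a single forward pass, and match each proposed position's distribution to the teacher's autoregressive next-token distribution with a KL loss. The sketch below is only an illustration under that assumption; the function name, shapes, and loss choice are hypothetical and not taken from the paper.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the vocabulary axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_token_distill_loss(student_logits, teacher_logits):
    """Hypothetical distillation loss: KL(teacher || student), averaged
    over the k predicted positions.

    student_logits: (k, V) -- the student proposes k future tokens in
                              one forward pass.
    teacher_logits: (k, V) -- the frozen teacher's next-token logits for
                              the same k positions, obtained by ordinary
                              one-token-at-a-time decoding.
    """
    p = softmax(teacher_logits)                 # teacher distributions
    log_p = np.log(p + 1e-12)
    log_q = np.log(softmax(student_logits) + 1e-12)
    # Per-position KL divergence, then mean over the k positions.
    return float((p * (log_p - log_q)).sum(axis=-1).mean())
```

Because the student keeps the teacher's architecture and only the training signal changes, the resulting checkpoint can be served exactly like the original model, which is the deployment property the abstract emphasizes.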


Source: arXiv:2602.06019v1 (http://arxiv.org/abs/2602.06019v1)
PDF: https://arxiv.org/pdf/2602.06019v1

Submission: 2/5/2026
Subjects: NLP; Computational Linguistics

