
Just on Time: Token-Level Early Stopping for Diffusion Language Models

Zahar Kohut

Abstract

Diffusion language models generate text through iterative refinement, a process that is often computationally inefficient because many tokens reach stability long before the final denoising step. We introduce a training-free, token-level early stopping approach that identifies convergence independently at each position. Our method leverages lightweight signals derived from the model's predictions and local context to dynamically determine when individual tokens can be finalized. This yields adaptive per-token freezing without task-specific fine-tuning, substantially reducing the total number of diffusion steps required. Across diverse benchmarks, spanning mathematical reasoning, general question answering, and scientific understanding, our approach achieves state-of-the-art efficiency gains while preserving generation quality.
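The abstract does not specify which convergence signals are used or how thresholds are chosen. As a rough illustration only, the Python sketch below simulates per-token freezing inside a denoising loop using two assumed signals: top-1 prediction confidence and argmax stability across consecutive steps. Everything here (`denoise_step`, `CONF_THRESHOLD`, `STABLE_STEPS`, the toy update rule) is hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical token-level early stopping for a diffusion LM denoising loop.
# Signals (assumed, not from the paper): top-1 confidence and argmax
# stability over consecutive refinement steps.

CONF_THRESHOLD = 0.9   # freeze a token once its top-1 probability exceeds this
STABLE_STEPS = 3       # ...and its argmax has not changed for this many steps

def denoise_step(logits, rng):
    """Stand-in for one diffusion refinement step: sharpens the logits with a
    little noise. A real model would re-predict all unfrozen positions here."""
    return logits * 1.3 + rng.normal(0, 0.05, logits.shape)

def generate(seq_len=8, vocab=50, max_steps=64, seed=0):
    rng = np.random.default_rng(seed)
    logits = rng.normal(0, 1, (seq_len, vocab))
    frozen = np.zeros(seq_len, dtype=bool)
    stable = np.zeros(seq_len, dtype=int)      # consecutive unchanged-argmax count
    prev_argmax = np.full(seq_len, -1)
    tokens = np.zeros(seq_len, dtype=int)
    steps_used = np.full(seq_len, max_steps)

    for t in range(max_steps):
        active = ~frozen
        if not active.any():
            break                               # every position converged early
        logits[active] = denoise_step(logits[active], rng)

        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        argmax = probs.argmax(axis=1)
        conf = probs.max(axis=1)

        # Track per-position argmax stability across consecutive steps.
        stable = np.where(argmax == prev_argmax, stable + 1, 0)
        prev_argmax = argmax

        # Freeze positions whose prediction is both confident and stable.
        to_freeze = active & (conf > CONF_THRESHOLD) & (stable >= STABLE_STEPS)
        tokens[to_freeze] = argmax[to_freeze]
        steps_used[to_freeze] = t + 1
        frozen |= to_freeze

    tokens[~frozen] = prev_argmax[~frozen]      # finalize any stragglers
    return tokens, steps_used

if __name__ == "__main__":
    toks, steps = generate()
    print("tokens:", toks)
    print("steps used per position:", steps)    # most positions stop early
```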


Source: arXiv:2602.11133v1 (http://arxiv.org/abs/2602.11133v1)
PDF: https://arxiv.org/pdf/2602.11133v1

Submission: 2/13/2026
Subjects: Machine Learning; Data Science