
Confidence-Based Decoding is Provably Efficient for Diffusion Language Models

Changxiao Cai

Abstract

Diffusion language models (DLMs) have emerged as a promising alternative to autoregressive (AR) models for language modeling, allowing flexible generation order and parallel generation of multiple tokens. However, this flexibility introduces a challenge absent in AR models: the \emph{decoding strategy} -- which determines the order and number of tokens generated at each iteration -- critically affects sampling efficiency. Among decoding strategies explored in practice, confidence-based methods, which adaptively select which and how many tokens to unmask based on prediction confidence, have shown strong empirical performance. Despite this success, our theoretical understanding of confidence-based decoding remains limited. In this work, we develop the first theoretical analysis framework for confidence-based decoding in DLMs. We focus on an entropy sum-based strategy that continues unmasking tokens within each iteration until the cumulative entropy exceeds a threshold, and show that it achieves $\varepsilon$-accurate sampling in KL divergence with an expected number of iterations $\widetilde O(H(X_0)/\varepsilon)$, where $H(X_0)$ denotes the entropy of the target data distribution. Notably, this strategy yields substantial sampling acceleration when the data distribution has low entropy relative to the sequence length, while automatically adapting to the intrinsic complexity of data without requiring prior knowledge or hyperparameter tuning. Overall, our results provide a theoretical foundation for confidence-based decoding and may inform the design of more efficient decoding strategies for DLMs.
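To make the entropy sum-based rule concrete, here is a minimal sketch of one plausible per-iteration selection step, assuming the model outputs a predicted token distribution at every still-masked position. The function name, the `threshold` parameter, and the tie-breaking order are illustrative assumptions, not the paper's actual implementation; the paper's analysis concerns the abstract rule (unmask in order of confidence until cumulative entropy exceeds a threshold), which this reproduces.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of a probability vector."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return float(-(nz * np.log(nz)).sum())

def select_tokens_entropy_sum(probs, masked, threshold):
    """Choose which masked positions to unmask in this iteration.

    probs:     (seq_len, vocab) predicted distributions per position
    masked:    boolean array, True where the position is still masked
    threshold: entropy budget for this iteration (nats) -- an
               illustrative hyperparameter of this sketch

    Positions are taken in order of increasing entropy (most confident
    first), and we keep unmasking until adding the next position would
    push the cumulative entropy past the threshold. At least one
    position is always unmasked so that decoding makes progress.
    """
    idx = np.flatnonzero(masked)
    ents = np.array([entropy(probs[i]) for i in idx])
    order = idx[np.argsort(ents)]
    chosen, total = [], 0.0
    for pos, h in zip(order, np.sort(ents)):
        if chosen and total + h > threshold:
            break
        chosen.append(int(pos))
        total += h
    return chosen
```

On a sequence where some positions are near-deterministic and others near-uniform, this rule unmasks many tokens per iteration over the confident positions and slows down over the uncertain ones, which is the adaptivity the abstract attributes to confidence-based decoding.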


Source: arXiv:2603.22248v1 (http://arxiv.org/abs/2603.22248v1)
PDF: https://arxiv.org/pdf/2603.22248v1

Submission: 3/24/2026
Subjects: Statistics; Data Science
