Research Paper · Researchia:202602.25044

Adaptation to Intrinsic Dependence in Diffusion Language Models

Yunxiao Zhao

Abstract

Diffusion language models (DLMs) have recently emerged as a promising alternative to autoregressive (AR) approaches, enabling parallel token generation beyond a rigid left-to-right order. Despite growing empirical success, the theoretical understanding of how unmasking schedules -- which specify the order in which tokens are revealed and how many are unmasked at each step of sampling -- affect generation quality remains limited. In this work, we introduce a distribution-agnostic unmasking schedule for DLMs that adapts to the (unknown) dependence structure of the target data distribution, without requiring any prior knowledge or hyperparameter tuning. In contrast to prior deterministic procedures that fix unmasking sizes, our method randomizes the number of tokens revealed at each iteration. We show that, for two specific parameter choices, the sampling convergence guarantees -- measured in Kullback-Leibler (KL) divergence -- scale as $\widetilde{O}(\mathsf{TC}/K)$ and $\widetilde{O}(\mathsf{DTC}/K)$, respectively. Here, $K$ is the number of iterations, and $\mathsf{TC}$ and $\mathsf{DTC}$ are the total correlation and dual total correlation of the target distribution, which capture the intrinsic dependence structure underlying the data. Importantly, our guarantees hold in the practically relevant parallel-sampling regime $K < L$, where $L$ is the token sequence length. These results significantly improve upon prior convergence theories and yield substantial sampling acceleration for low-complexity distributions. Overall, our findings unveil the adaptivity of DLMs to intrinsic data structures and shed light on the benefit of randomized unmasking sizes in inference-schedule design.

Submitted: February 25, 2026
Subjects: Machine Learning; Data Science

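For reference, the two dependence measures appearing in the rates are standard information-theoretic quantities; assuming the usual conventions (the abstract does not spell out normalizations), for a sequence $X = (X_1, \dots, X_L)$:

$$\mathsf{TC}(X) = \sum_{i=1}^{L} H(X_i) - H(X_1, \dots, X_L), \qquad \mathsf{DTC}(X) = H(X_1, \dots, X_L) - \sum_{i=1}^{L} H(X_i \mid X_{\setminus i}),$$

where $H$ denotes Shannon entropy and $X_{\setminus i}$ is the sequence with the $i$-th token removed. Both quantities vanish when tokens are independent and grow with statistical dependence, so the $\widetilde{O}(\mathsf{TC}/K)$ and $\widetilde{O}(\mathsf{DTC}/K)$ bounds formalize the claim that weakly dependent data can be sampled accurately with few parallel iterations.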
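To make the "randomized unmasking sizes" idea concrete, below is a minimal Python sketch of a masked-diffusion sampling loop with a randomized schedule. It assumes a hypothetical model(x) returning per-position logits and a placeholder MASK_ID; the reveal probability 1/k per remaining step is one illustrative choice, not necessarily the paper's parameterization.

import torch

MASK_ID = 0  # hypothetical id of the [MASK] token

def sample_randomized(model, L, K, device="cpu"):
    # Start from an all-masked sequence of length L.
    x = torch.full((L,), MASK_ID, dtype=torch.long, device=device)
    masked = torch.ones(L, dtype=torch.bool, device=device)
    for k in range(K, 0, -1):  # count remaining iterations down to 1
        logits = model(x)  # placeholder call: (L, vocab) per-position logits
        probs = torch.softmax(logits, dim=-1)
        cand = torch.multinomial(probs, 1).squeeze(-1)  # draft token per position
        # Randomized unmasking: each still-masked position is revealed
        # with probability 1/k, so the per-step unmasking size is
        # Binomial(#masked, 1/k) rather than a fixed L/K.
        reveal = masked & (torch.rand(L, device=device) < 1.0 / k)
        if k == 1:
            reveal = masked.clone()  # final step reveals everything left
        x[reveal] = cand[reveal]
        masked &= ~reveal
    return x

Under this 1/k rule, each position's reveal step is uniform over the K iterations, so the unmasking order is uniformly random and the per-step sizes fluctuate around L/K; the paper's actual parameter choices (those achieving the $\mathsf{TC}/K$ and $\mathsf{DTC}/K$ rates) are not specified in the abstract.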


Source: arXiv:2602.20126v1 - http://arxiv.org/abs/2602.20126v1
PDF: https://arxiv.org/pdf/2602.20126v1


