Research Paper · Researchia:202602.20001 · [Data Science > Machine Learning]

Sink-Aware Pruning for Diffusion Language Models

Aidar Myrzakhan

Abstract

Diffusion Language Models (DLMs) incur high inference cost due to iterative denoising, motivating efficient pruning. Existing pruning heuristics, largely inherited from autoregressive (AR) LLMs, typically preserve attention-sink tokens because AR sinks serve as stable global anchors. We show that this assumption does not hold for DLMs: the attention-sink position exhibits substantially higher variance over the full generation trajectory (measured by how the dominant sink locations shift across timesteps), indicating that sinks are often transient and less structurally essential than in AR models. Based on this observation, we propose Sink-Aware Pruning, which automatically identifies and prunes unstable sinks in DLMs (whereas prior studies typically keep sinks in AR LLMs). Without retraining, our method achieves a better quality-efficiency trade-off and outperforms strong prior pruning baselines under matched compute. Our code is available at https://github.com/VILA-Lab/Sink-Aware-Pruning.
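The abstract's key measurement — how much the dominant sink position shifts across denoising timesteps — can be sketched in a few lines. The snippet below is an illustrative reading of that idea, not the authors' released implementation: the attention-tensor layout, the column-mean definition of "dominant sink," and the instability score are all assumptions made for this example.

```python
import numpy as np

def sink_instability(attn):
    """Fraction of timesteps whose dominant attention sink differs from the
    overall most common sink position (0.0 = perfectly stable sink).

    attn: array of shape (T, H, S, S) -- attention weights over T denoising
    timesteps, H heads, sequence length S (hypothetical layout).
    """
    T, H, S, _ = attn.shape
    # Mean attention received by each key position, per timestep.
    received = attn.mean(axis=(1, 2))       # (T, S)
    dominant = received.argmax(axis=1)      # (T,) dominant sink per timestep
    mode = np.bincount(dominant, minlength=S).argmax()
    return float((dominant != mode).mean())

# Toy contrast: a fixed sink (AR-like) vs. a sink that drifts across steps.
rng = np.random.default_rng(0)
stable = rng.random((8, 2, 16, 16))
stable[:, :, :, 0] += 5.0                   # sink pinned at position 0
drift = rng.random((8, 2, 16, 16))
for t in range(8):
    drift[t, :, :, t % 16] += 5.0           # sink moves every timestep

print(sink_instability(stable))             # low: sink position is stable
print(sink_instability(drift))              # high: sink position is transient
```

Under this reading, tokens whose sink role is transient (high instability) would be candidates for pruning, whereas an AR-style heuristic would keep them unconditionally.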


Source: arXiv:2602.17664v1 — http://arxiv.org/abs/2602.17664v1
PDF: https://arxiv.org/pdf/2602.17664v1

Submission: 2/20/2026
Subjects: Machine Learning; Data Science
