
Language Diffusion Models are Associative Memories Capable of Retrieving Unseen Data

Bao Pham

Abstract

When do language diffusion models memorize their training data, and how can we quantitatively assess their true generative regime? We address these questions by showing that Uniform-based Discrete Diffusion Models (UDDMs) fundamentally behave as Associative Memories (AMs) with emergent creative capabilities. The core idea of an AM is to reliably recover stored data points as memories by establishing distinct basins of attraction around them. Historically, models like Hopfield networks use an explicit energy function to guarantee these stable attractors. We broaden this perspective by leveraging the observation that energy is not strictly necessary, as basins of attraction can also be formed via conditional likelihood maximization. By evaluating token recovery of training and test examples, we identify in UDDMs a sharp memorization-to-generalization transition governed by the size of the training dataset: as the dataset grows, basins around training examples shrink and basins around unseen test examples expand, until both eventually converge to the same level. Crucially, we can detect this transition using only the conditional entropy of predicted token sequences: memorization is characterized by vanishing conditional entropy, while in the generalization regime the conditional entropy of most tokens remains finite. Thus, conditional entropy offers a practical probe for the memorization-to-generalization transition in deployed models.
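To make the entropy probe concrete, below is a minimal sketch (not taken from the paper) of how one might compute the per-token conditional entropy of a UDDM's predicted token distributions and the fraction of near-deterministic positions. The model interface, the handling of the corrupted input, and the `entropy_eps` threshold are assumptions made purely for illustration.

```python
import torch
import torch.nn.functional as F

def token_conditional_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Per-token conditional entropy (in nats) of the model's predicted
    token distributions. `logits` has shape (seq_len, vocab_size)."""
    log_p = F.log_softmax(logits, dim=-1)
    return -(log_p.exp() * log_p).sum(dim=-1)

def memorization_probe(model, noisy_tokens, entropy_eps: float = 1e-2):
    """Hypothetical probe: given a uniformly corrupted token sequence,
    query the denoising model for its per-position token distributions and
    report the fraction of positions whose conditional entropy has (nearly)
    vanished. A fraction near 1 would suggest the memorization regime
    described in the abstract; mostly finite entropies would suggest the
    generalization regime. `model` and `entropy_eps` are illustrative
    stand-ins, not an interface defined by the paper."""
    with torch.no_grad():
        logits = model(noisy_tokens)              # (seq_len, vocab_size)
    entropy = token_conditional_entropy(logits)   # (seq_len,)
    frac_vanishing = (entropy < entropy_eps).float().mean().item()
    return entropy, frac_vanishing
```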

Submitted: April 30, 2026
Subjects: Artificial Intelligence (AI)



Source: arXiv:2604.26841v1 - http://arxiv.org/abs/2604.26841v1
PDF: https://arxiv.org/pdf/2604.26841v1
