
The Rules-and-Facts Model for Simultaneous Generalization and Memorization in Neural Networks

Gabriele Farné

Abstract

A key capability of modern neural networks is their capacity to simultaneously learn underlying rules and memorize specific facts or exceptions. Yet theoretical understanding of this dual capability remains limited. We introduce the Rules-and-Facts (RAF) model, a minimal solvable setting that enables precise characterization of this phenomenon by bridging two classical lines of work in the statistical physics of learning: the teacher-student framework for generalization and Gardner-style capacity analysis for memorization. In the RAF model, a fraction 1 − ε of training labels is generated by a structured teacher rule, while a fraction ε consists of unstructured facts with random labels. We characterize when the learner can simultaneously recover the underlying rule, allowing generalization to new data, and memorize the unstructured examples. Our results quantify how overparameterization enables the simultaneous realization of these two objectives: sufficient excess capacity supports memorization, while regularization and the choice of kernel or nonlinearity control the allocation of capacity between rule learning and memorization. The RAF model provides a theoretical foundation for understanding how modern neural networks can infer structure while storing rare or non-compressible information.

Submitted: March 27, 2026. Subjects: Statistics; Data Science
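The RAF setup described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's exact model: it assumes a perceptron teacher for the structured rule, flips a fraction ε of examples to random labels (the "facts"), and fits an overparameterized random-ReLU-features ridge regressor. All dimensions, the feature map, and the regularization value are illustrative choices. Training accuracy on the random-label subset probes memorization; accuracy on fresh teacher-labeled data probes rule recovery.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n, eps = 50, 400, 0.1            # input dim, train size, fact fraction
w_teacher = rng.standard_normal(d)  # structured "rule": a teacher perceptron

# Training set: (1 - eps) rule-labeled examples plus eps random "facts".
X = rng.standard_normal((n, d))
y = np.sign(X @ w_teacher)
n_facts = int(eps * n)
fact_idx = rng.choice(n, size=n_facts, replace=False)
y[fact_idx] = rng.choice([-1.0, 1.0], size=n_facts)  # unstructured labels

# Overparameterized learner: random ReLU features + ridge regression.
p, lam = 2000, 1e-3                 # feature count >> n gives excess capacity
W = rng.standard_normal((d, p)) / np.sqrt(d)
phi = lambda Z: np.maximum(Z @ W, 0.0)
Phi = phi(X)
coef = np.linalg.solve(Phi.T @ Phi + lam * np.eye(p), Phi.T @ y)

# Memorization: training accuracy on the random-label facts.
mem = np.mean(np.sign(phi(X[fact_idx]) @ coef) == y[fact_idx])

# Generalization: accuracy of the learned rule on fresh teacher-labeled data.
X_test = rng.standard_normal((2000, d))
gen = np.mean(np.sign(phi(X_test) @ coef) == np.sign(X_test @ w_teacher))

print(f"memorization accuracy on facts: {mem:.2f}")
print(f"generalization accuracy on rule: {gen:.2f}")
```

With small ridge regularization the interpolating learner tends to fit the random facts nearly perfectly while still tracking the teacher rule on new data; increasing lam or shrinking p trades memorization for rule learning, in the spirit of the capacity-allocation picture in the abstract.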


Source: arXiv:2603.25579v1 (http://arxiv.org/abs/2603.25579v1). PDF: https://arxiv.org/pdf/2603.25579v1
