A Rate-Distortion Perspective on the Emergence of Number Sense in Unsupervised Generative Models
Abstract
Number sense is a core cognitive ability supporting a variety of adaptive behaviors and is foundational for mathematical learning. Here, we study its emergence in unsupervised generative models through the lens of rate-distortion theory (RDT), a normative framework for understanding information processing under limited resources. We train β-Variational Autoencoders (β-VAEs) -- which embody key formal principles of RDT -- on synthetic images containing varying numbers of items, as commonly used in numerosity perception research. We systematically vary the encoding capacity and assess the models' sensitivity to numerosity and the robustness of the emergent numerical representations through a comprehensive set of analyses, including numerosity estimation and discrimination tasks, latent-space analysis, generative capabilities, and generalization to novel stimuli. In line with RDT, we find that behavioral performance in numerosity perception, as well as the ability to extract numerosity unconfounded by non-numerical visual features, scales with encoding capacity according to a power law. At high capacity, the unsupervised model develops a robust neural code for numerical information, with performance closely approximating that of a supervised model explicitly trained for visual enumeration; it exhibits strong generative abilities and generalizes well to novel images. At low capacity, in contrast, the model shows marked deficits in numerosity perception and representation. Finally, comparison with human data shows that models trained at intermediate capacity levels span the full range of human behavioral performance while still developing a robust emergent numerical code. In sum, our results show that unsupervised generative models can develop a number sense, and they demonstrate that rate-distortion theory provides a powerful information-theoretic framework for understanding how capacity constraints shape numerosity perception.
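The connection between β-VAEs and rate-distortion theory invoked above can be made concrete through the standard β-VAE objective (this is the commonly used formulation from the β-VAE literature, not a formula quoted from this paper):

```latex
% Standard \beta-VAE objective (maximized over encoder parameters \phi
% and decoder parameters \theta):
\mathcal{L}(\theta, \phi; x, \beta) =
    \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\left[ \log p_\theta(x \mid z) \right]}_{\text{reconstruction } \approx \text{ negative distortion}}
    \; - \; \beta \,
    \underbrace{D_{\mathrm{KL}}\!\left( q_\phi(z \mid x) \,\|\, p(z) \right)}_{\text{rate}}
```

In rate-distortion terms, the KL divergence bounds the number of bits the encoder transmits about each input (the rate), while the reconstruction term measures distortion; sweeping β traces out points along a rate-distortion curve, which is one way "encoding capacity" can be operationalized when training models at different capacity levels.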