Linear Readout of Neural Manifolds with Continuous Variables
Abstract
Brains and artificial neural networks compute with continuous variables such as object position or stimulus orientation. However, the complex variability in neural responses makes it difficult to link internal representational structure to task performance. We develop a statistical-mechanical theory of regression capacity that relates linear decoding efficiency of continuous variables to geometric properties of neural manifolds. Our theory handles complex neural variability and applies to real data, revealing increasing capacity for decoding object position and size along the monkey visual stream.
Source: arXiv:2603.10956v1 - http://arxiv.org/abs/2603.10956v1 PDF: https://arxiv.org/pdf/2603.10956v1 Original Link: http://arxiv.org/abs/2603.10956v1
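The linear readout the abstract refers to can be illustrated with a minimal sketch: fitting a linear decoder to recover a continuous variable (e.g. object position) from noisy neural responses. This is only a toy illustration of linear decoding, not the paper's capacity theory; the tuning model, noise level, and population size below are arbitrary assumptions for demonstration.

```python
import numpy as np

# Toy sketch of linearly decoding a continuous variable from simulated
# neural responses. Illustrative only -- not the paper's method.
rng = np.random.default_rng(0)

n_neurons, n_trials = 50, 200
stimulus = rng.uniform(-1.0, 1.0, size=n_trials)  # continuous variable, e.g. position

# Each neuron gets a random linear tuning to the stimulus, plus trial noise.
tuning = rng.normal(size=n_neurons)
responses = np.outer(stimulus, tuning) + 0.1 * rng.normal(size=(n_trials, n_neurons))

# Fit linear readout weights w by least squares: stimulus ~= responses @ w.
w, *_ = np.linalg.lstsq(responses, stimulus, rcond=None)
decoded = responses @ w

# Decoding quality as coefficient of determination (R^2).
r2 = 1.0 - np.sum((stimulus - decoded) ** 2) / np.sum((stimulus - stimulus.mean()) ** 2)
print(f"decoding R^2: {r2:.3f}")
```

With a linear tuning model and modest noise, the readout recovers the variable nearly perfectly; the paper's contribution is a theory of how such decoding efficiency depends on manifold geometry under far more complex variability.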
Mar 12, 2026
Neuroscience