A Complete Symmetry Classification of Shallow ReLU Networks
Abstract
Parameter space is not function space for neural network architectures. This fact, investigated as early as the 1990s under terms such as "reverse engineering" or "parameter identifiability", has led to the natural question of parameter space symmetries: the study of distinct parameters in neural architectures which realize the same function. Indeed, the quotient space obtained by identifying parameters giving rise to the same function, called the neuromanifold, has been shown in some cases to have rich geometric properties, impacting optimization dynamics. Thus far, techniques toward complete classifications have required the analyticity of the activation function, notably excluding the important case of ReLU. Here, in contrast, we exploit the non-differentiability of the ReLU activation to provide a complete classification of the symmetries in the shallow case.
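As a concrete illustration (not taken from the paper itself), the two classical families of parameter symmetries for a shallow ReLU network f(x) = Σᵢ aᵢ · relu(wᵢ · x + bᵢ) are permutation of hidden units and per-unit positive rescaling, the latter using the positive homogeneity relu(c·z) = c·relu(z) for c > 0. The following minimal numpy sketch, with all names illustrative, verifies that both operations leave the computed function unchanged:

```python
# Minimal sketch: two well-known parameter symmetries of a shallow ReLU
# network. This only exhibits the symmetries; the paper's contribution is
# a complete classification, which this sketch does not reproduce.
import numpy as np

def shallow_relu(x, W, b, a):
    """Shallow ReLU network: f(x) = a . relu(W x + b)."""
    return a @ np.maximum(W @ x + b, 0.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # hidden weights: 4 units, 3 inputs
b = rng.normal(size=4)        # hidden biases
a = rng.normal(size=4)        # output weights
x = rng.normal(size=3)        # a test input

f = shallow_relu(x, W, b, a)

# Symmetry 1: permute the hidden units (rows of W, entries of b and a).
perm = rng.permutation(4)
f_perm = shallow_relu(x, W[perm], b[perm], a[perm])

# Symmetry 2: positively rescale each unit's incoming weights and bias by
# c_i > 0 and its outgoing weight by 1/c_i, using relu(c z) = c relu(z).
c = rng.uniform(0.5, 2.0, size=4)
f_scale = shallow_relu(x, c[:, None] * W, c * b, a / c)

assert np.allclose(f, f_perm) and np.allclose(f, f_scale)
print(f, f_perm, f_scale)  # identical outputs from distinct parameters
```

These two families show why parameter space is not function space; the abstract's claimed classification would characterize all such identifications for shallow ReLU networks, not merely these standard ones.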
Source: arXiv:2604.14037v1 (http://arxiv.org/abs/2604.14037v1; PDF: https://arxiv.org/pdf/2604.14037v1)
Apr 17, 2026
Data Science
Machine Learning