Research Paper · Researchia:202601.21017 · Neuroscience

Power-Law Scaling in the Classification Performance of Small-Scale Spiking Neural Networks

Zhengdi Zhang

Abstract

This paper investigates the classification capability of small-scale spiking neural networks based on the Leaky Integrate-and-Fire (LIF) neuron model. We analyze the relationship between classification accuracy and three factors: the number of neurons, the number of stimulus nodes, and the number of classification categories. Notably, we employ a large language model (LLM) to assist in discovering the underlying functional relationships among these variables, and compare its performance against traditional methods such as linear and polynomial fitting. Experimental results show that classification accuracy follows a power-law scaling primarily with the number of categories, while the effects of neuron count and stimulus nodes are relatively minor. A key advantage of the LLM-based approach is its ability to propose plausible functional forms beyond pre-defined equation templates, often leading to more concise or accurate mathematical descriptions of the observed scaling laws. This finding has important implications for understanding efficient computation in biological neural systems and for pioneering new paradigms in AI-aided scientific discovery.
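The abstract's central claim is that classification accuracy scales as a power law in the number of categories. A power law of the form accuracy ≈ a · C^(−b) becomes linear in log-log space, so the exponent can be recovered with ordinary least squares. The following sketch illustrates this fitting procedure; the data values and the parameters `a` and `b` are hypothetical placeholders, not results from the paper.

```python
import math

# Hypothetical data: classification accuracy vs. number of categories C.
# These values are illustrative only, not taken from the paper.
categories = [2, 4, 8, 16, 32]
accuracy = [0.95, 0.81, 0.69, 0.58, 0.50]

# Power law accuracy ~ a * C**(-b) is linear in log-log space:
#   log(acc) = log(a) - b * log(C)
# so fit a line by ordinary least squares on the log-transformed data.
xs = [math.log(c) for c in categories]
ys = [math.log(acc) for acc in accuracy]
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
b = -slope                          # power-law exponent
a = math.exp(mean_y - slope * mean_x)  # prefactor

print(f"accuracy ~ {a:.3f} * C^(-{b:.3f})")
```

The same log-log regression can serve as the "traditional fitting" baseline the abstract compares the LLM-assisted approach against; the LLM's role is to propose the functional form itself rather than assume it.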


Source: arXiv:2601.14961v1 (http://arxiv.org/abs/2601.14961v1)
PDF: https://arxiv.org/pdf/2601.14961v1

Submission: 1/21/2026
Subjects: Neuroscience

