Random Cloud: Finding Minimal Neural Architectures Without Training
Abstract
I propose the Random Cloud method, a training-free approach to neural architecture search that discovers minimal feedforward network topologies through stochastic exploration and progressive structural reduction. Unlike post-training pruning methods, which require a full train-prune-retrain cycle, this method evaluates randomly initialized networks without backpropagation, progressively reduces their topology, and trains only the best minimal candidate at the end. I evaluate on 7 classification benchmarks against magnitude-pruning and random-pruning baselines. Random Cloud matches or outperforms both baselines on 6 of 7 datasets, achieving a statistically significant accuracy improvement over magnitude pruning on Sonar with an 87% parameter reduction. Crucially, the method is faster than both pruning baselines on 4 of 5 datasets (0.67–0.94× the cost of full training), since it avoids training the full-size network entirely.
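The abstract describes the pipeline at a high level but does not give the scoring rule or the reduction schedule, so the following is only a minimal sketch of one plausible reading, in numpy. The forward-pass-accuracy score, the 200-sample cloud, the 0.8 width-shrink factor, the tolerance tol, and the tanh activation are all illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_net(sizes):
    # Draw one random (never-trained) network: Gaussian weights
    # scaled by fan-in, zero biases.
    Ws = [rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
          for m, n in zip(sizes, sizes[1:])]
    bs = [np.zeros(n) for n in sizes[1:]]
    return Ws, bs

def forward(X, Ws, bs):
    # Plain forward pass; no gradients are computed anywhere.
    h = X
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = np.tanh(h @ W + b)
    return h @ Ws[-1] + bs[-1]  # raw logits

def cloud_score(X, y, sizes, n_samples=200):
    # Best forward-pass accuracy over a "cloud" of random networks
    # of a fixed topology (an assumed stand-in for the paper's
    # unspecified training-free evaluation criterion).
    best = 0.0
    for _ in range(n_samples):
        Ws, bs = sample_net(sizes)
        acc = float(np.mean(forward(X, Ws, bs).argmax(axis=1) == y))
        best = max(best, acc)
    return best

def random_cloud_search(X, y, in_dim, n_classes, width=64, tol=0.02):
    # Progressive structural reduction: shrink the hidden layer while
    # the cloud score stays within `tol` of the starting-width score.
    base = cloud_score(X, y, [in_dim, width, n_classes])
    while width > 2:
        cand = max(2, int(width * 0.8))
        if cloud_score(X, y, [in_dim, cand, n_classes]) < base - tol:
            break
        width = cand
    return width

# Toy usage on synthetic 2-class data.
X = rng.normal(size=(300, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
w = random_cloud_search(X, y, in_dim=10, n_classes=2)
print("minimal hidden width:", w)
# The final step (not shown) trains only this minimal candidate with
# ordinary backprop, e.g. an MLP with hidden_layer_sizes=(w,).
```

The point of the structure is that every candidate is scored with forward passes alone; backpropagation would appear exactly once, when the surviving minimal architecture is trained at the end.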
Source: arXiv:2604.26830v1 (http://arxiv.org/abs/2604.26830v1; PDF: https://arxiv.org/pdf/2604.26830v1)
Apr 30, 2026
Artificial Intelligence