Research Paper · Researchia:202605.07017

Estimating the expected output of wide random MLPs more efficiently than sampling

Wilson Wu

Abstract

By far the most common way to estimate an expected loss in machine learning is to draw samples, compute the loss on each one, and take the empirical average. However, sampling is not necessarily optimal. Given an MLP at initialization, we show how to estimate its expected output over Gaussian inputs without running samples through the network at all. Instead, we produce approximate representations of the distributions of activations at each layer, leveraging tools such as cumulants and Hermite expansions. We show both theoretically and empirically that for sufficiently wide networks, our estimator achieves a target mean squared error using substantially fewer FLOPs than Monte Carlo sampling. We find moreover that our methods perform particularly well at estimating the probabilities of rare events, and additionally demonstrate how they can be used for model training. Together, these findings suggest a path to producing models with a greatly reduced probability of catastrophic tail risks.

Submitted: May 7, 2026 · Subjects: Machine Learning; Data Science
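The paper's full method propagates approximate activation distributions through every layer using cumulants and Hermite expansions; the details are in the paper itself. As a minimal illustration of the underlying idea, for a single random-feature row the expectation over Gaussian inputs already has a closed form that a Monte Carlo baseline must approach by averaging many samples. The sketch below (our own example, not code from the paper) compares the two for a ReLU pre-activation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 256  # input dimension (arbitrary choice for this sketch)

# One row of a random first-layer weight matrix, with the usual 1/sqrt(d) scaling.
w = rng.normal(0.0, 1.0 / np.sqrt(d), size=d)

# For x ~ N(0, I), the pre-activation z = w @ x is Gaussian with
# standard deviation ||w||, so E[relu(z)] = ||w|| / sqrt(2*pi) exactly.
sigma = np.linalg.norm(w)
analytic = sigma / np.sqrt(2.0 * np.pi)

# Monte Carlo baseline: sample inputs, push them through, average.
n = 100_000
x = rng.normal(size=(n, d))
mc = np.maximum(x @ w, 0.0).mean()

print(f"analytic: {analytic:.4f}, monte carlo: {mc:.4f}")
```

The Monte Carlo estimate converges at the usual O(1/sqrt(n)) rate, while the closed form costs O(d) FLOPs once. The paper's contribution is, roughly, extending this kind of distributional shortcut through the nonlinearities of a deep, wide MLP, where no exact closed form is available.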


Source: arXiv:2605.05179v1 (http://arxiv.org/abs/2605.05179v1)
PDF: https://arxiv.org/pdf/2605.05179v1

