Research Paper | Researchia:202602.17041 | [Data Science > Machine Learning]

Random Forests as Statistical Procedures: Design, Variance, and Dependence

Nathaniel S. O'Connell

Abstract

Random forests are widely used prediction procedures, yet are typically described algorithmically rather than as statistical designs acting on a fixed dataset. We develop a finite-sample, design-based formulation of random forests in which each tree is an explicit randomized conditional regression function. This perspective yields an exact variance identity for the forest predictor that separates finite-aggregation variability from a structural dependence term that persists even under infinite aggregation. We further decompose both single-tree dispersion and inter-tree covariance using the laws of total variance and covariance, isolating two fundamental design mechanisms: reuse of training observations and alignment of data-adaptive partitions. These mechanisms induce a strict covariance floor, demonstrating that predictive variability cannot be eliminated by increasing the number of trees alone. The resulting framework clarifies how resampling, feature-level randomization, and split selection govern resolution, tree variability, and dependence, and establishes random forests as explicit finite-sample statistical designs whose behavior is determined by their underlying randomized construction.
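The "covariance floor" the abstract describes can be illustrated with the classical equicorrelation model for an ensemble mean: if B tree predictions each have variance sigma^2 and pairwise correlation rho, the variance of their average is rho*sigma^2 + (1 - rho)*sigma^2 / B, whose first term survives as B grows. The sketch below is not the paper's exact identity; the equicorrelated-Gaussian setup and the values of sigma^2 and rho are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (assumed equicorrelation model, not the paper's
# exact decomposition): for B tree predictions, each with variance
# sigma2 and common pairwise correlation rho,
#   Var(forest mean) = rho * sigma2 + (1 - rho) * sigma2 / B.
# The first term is the dependence "floor" that no amount of extra
# trees can remove; only the second term vanishes as B -> infinity.

rng = np.random.default_rng(0)
sigma2, rho = 1.0, 0.3  # hypothetical values for illustration

def forest_variance(B, sigma2, rho):
    """Exact variance of the mean of B equicorrelated predictions."""
    return rho * sigma2 + (1 - rho) * sigma2 / B

def simulate(B, n_rep=200_000):
    """Monte Carlo check: draw correlated 'tree predictions', average them,
    and return the empirical variance of the ensemble mean."""
    # Equicorrelated Gaussians = shared component + independent component.
    shared = rng.normal(0.0, np.sqrt(rho * sigma2), size=(n_rep, 1))
    indep = rng.normal(0.0, np.sqrt((1 - rho) * sigma2), size=(n_rep, B))
    trees = shared + indep
    return trees.mean(axis=1).var()

for B in (1, 10, 100, 1000):
    print(B, forest_variance(B, sigma2, rho), simulate(B))
```

As B increases, both the closed-form and simulated variances approach rho*sigma2 rather than zero, which is the qualitative behavior the abstract attributes to structural dependence between trees.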


Source: arXiv:2602.13104v1 - http://arxiv.org/abs/2602.13104v1
PDF: https://arxiv.org/pdf/2602.13104v1

Submission: 2/17/2026
Subjects: Machine Learning; Data Science
