Research Paper | Researchia:202602.27001

Model Agreement via Anchoring

Eric Eaton

Abstract

Numerous lines of work aim to control model disagreement -- the extent to which two machine learning models disagree in their predictions. We adopt a simple and standard notion of model disagreement in real-valued prediction problems, namely the expected squared difference in predictions between two models trained on independent samples, without any coordination of the training processes. We would like to be able to drive disagreement to zero with some natural parameter(s) of the training procedure, using analyses that can be applied to existing training methodologies. We develop a simple general technique for proving bounds on independent model disagreement based on anchoring to the average of the two models within the analysis. We then apply this technique to prove disagreement bounds for four commonly used machine learning algorithms: (1) stacked aggregation over an arbitrary model class (where disagreement is driven to 0 with the number of models k being stacked); (2) gradient boosting (where disagreement is driven to 0 with the number of iterations k); (3) neural network training with architecture search (where disagreement is driven to 0 with the size n of the architecture being optimized over); and (4) regression tree training over all regression trees of fixed depth (where disagreement is driven to 0 with the depth d of the tree architecture). For clarity, we work out our initial bounds in the setting of one-dimensional regression with squared error loss -- but then show that all of our results generalize to multi-dimensional regression with any strongly convex loss.
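The disagreement measure the abstract adopts can be estimated empirically: train two models on independent samples with no coordination, then average the squared difference of their predictions on fresh points. The sketch below is an illustration only -- the model (closed-form ridge regression) and the synthetic data are assumptions for demonstration, not the paper's setup.

```python
# Illustrative estimate of model disagreement: the expected squared
# difference in predictions between two models trained on independent
# samples. Ridge regression and synthetic linear data are assumptions
# chosen for a self-contained example.
import numpy as np

rng = np.random.default_rng(0)

def train_ridge(X, y, lam=1.0):
    # Closed-form ridge regression: w = (X^T X + lam I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def sample(n, d=5, noise=0.1):
    # Draws an i.i.d. sample from a simple linear model with Gaussian noise.
    X = rng.normal(size=(n, d))
    w_true = np.ones(d)
    y = X @ w_true + noise * rng.normal(size=n)
    return X, y

# Two independent training samples, with no coordination between the runs.
X1, y1 = sample(2000)
X2, y2 = sample(2000)
w1, w2 = train_ridge(X1, y1), train_ridge(X2, y2)

# Empirical disagreement: mean of (f1(x) - f2(x))^2 over fresh test points.
X_test, _ = sample(5000)
disagreement = np.mean((X_test @ w1 - X_test @ w2) ** 2)
print(f"estimated disagreement: {disagreement:.6f}")
```

With large independent samples from the same distribution, both fitted models concentrate around the same target, so the estimated disagreement is small; shrinking the sample size (or raising the noise) drives it up.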

Submitted: February 27, 2026. Subjects: AI; Artificial Intelligence

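One way to see how anchoring to the average of two models can enter such an analysis is the following algebraic identity (an illustration of the general idea, not necessarily the paper's exact decomposition):

```latex
% With anchor \bar{f} = (f_1 + f_2)/2, the squared disagreement splits
% exactly into each model's squared distance to the shared anchor:
\[
  \bigl(f_1(x) - f_2(x)\bigr)^2
  = 2\bigl(f_1(x) - \bar{f}(x)\bigr)^2
  + 2\bigl(f_2(x) - \bar{f}(x)\bigr)^2,
  \qquad \bar{f} = \tfrac{1}{2}(f_1 + f_2).
\]
```

So bounding each model's expected squared distance to the anchor bounds the disagreement E[(f_1(X) - f_2(X))^2] itself.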


Source: arXiv:2602.23360v1 (http://arxiv.org/abs/2602.23360v1)
PDF: https://arxiv.org/pdf/2602.23360v1
