A Variational Estimator for $L_p$ Calibration Errors
Abstract
Calibration, the problem of ensuring that predicted probabilities align with observed class frequencies, is a basic desideratum for reliable prediction with machine learning systems. Calibration error is traditionally assessed via a divergence function, using the expected divergence between predictions and empirical frequencies. Accurately estimating this quantity is challenging, especially in the multiclass setting. Here, we show how to extend a recent variational framework for estimating calibration errors beyond divergences induced by proper losses, to cover a broad class of calibration errors induced by divergences. Our method can separate over- and under-confidence and, unlike non-variational approaches, avoids overestimation. We provide extensive experiments and integrate our code in the open-source package probmetrics (https://github.com/dholzmueller/probmetrics) for evaluating calibration errors.
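To make the quantity being estimated concrete, here is a minimal sketch of the standard *binned* $L_p$ calibration error for binary predictions: predictions are grouped into confidence bins, and the per-bin gap between mean predicted probability and empirical frequency is aggregated. Note this is the conventional baseline estimator that the abstract contrasts with, not the paper's variational method; the function name, binning scheme, and bin count are illustrative choices.

```python
import numpy as np

def binned_lp_calibration_error(probs, labels, p=1, n_bins=15):
    """Binned L_p calibration error for binary predictions.

    probs  : predicted probabilities of the positive class, in [0, 1]
    labels : observed binary outcomes (0 or 1)
    p      : order of the L_p norm (p=1 recovers the familiar ECE)

    This is the classical binned estimator shown for illustration only;
    the paper's variational estimator is a different construction.
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    # Assign each prediction to a confidence bin.
    idx = np.clip(np.digitize(probs, bins) - 1, 0, n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            conf = probs[mask].mean()   # mean predicted probability in bin
            freq = labels[mask].mean()  # empirical positive frequency in bin
            err += mask.mean() * abs(conf - freq) ** p
    return err ** (1.0 / p)
```

Binned estimators of this kind are known to be sensitive to the bin count and can systematically overestimate the true calibration error, which is one motivation the abstract gives for the variational approach.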
Source: arXiv:2602.24230v1 (http://arxiv.org/abs/2602.24230v1), PDF: https://arxiv.org/pdf/2602.24230v1
Mar 3, 2026
Data Science
Statistics