When Your Model Stops Working: Anytime-Valid Calibration Monitoring
Abstract
Practitioners monitoring deployed probabilistic models face a fundamental trap: any fixed-sample test applied repeatedly over an unbounded stream will eventually raise a false alarm, even when the model remains perfectly stable. Existing methods typically lack formal error guarantees, conflate alarm time with changepoint location, and monitor indirect signals that do not fully characterize calibration. We present PITMonitor, an anytime-valid, calibration-specific monitor that detects distributional shifts in probability integral transforms (PITs) via a mixture e-process, providing Type I error control over an unbounded monitoring horizon as well as Bayesian changepoint estimation. On river's FriedmanDrift benchmark, PITMonitor achieves detection rates competitive with the strongest baselines across all three drift scenarios, although its detection delay is substantially longer under local drift.
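The core idea in the abstract can be sketched in a few lines. Under correct calibration, the PITs u_t = F_t(y_t) are i.i.d. Uniform(0,1), so the running likelihood ratio of any Beta(a, b) alternative against the uniform null is a nonnegative martingale starting at 1, and so is a fixed average of such ratios; alarming when that mixture exceeds 1/alpha bounds the false-alarm probability by alpha uniformly over an unbounded horizon, by Ville's inequality. This is a minimal illustrative sketch, not the paper's implementation: the class name, the particular Beta grid, and the drifted test stream are assumptions, and the paper's Bayesian changepoint estimation is omitted.

```python
import math
import random

def log_beta_pdf(u, a, b):
    # Log density of Beta(a, b) at u in (0, 1); the Uniform(0,1) null has log density 0.
    log_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return (a - 1) * math.log(u) + (b - 1) * math.log(1 - u) - log_B

class PITEMonitor:
    """Mixture e-process over a grid of Beta alternatives (hypothetical sketch).

    Each component tracks the log likelihood ratio of Beta(a, b) vs. Uniform(0,1).
    Under calibration every component is a martingale with mean 1, hence so is
    their average; crossing 1/alpha triggers an anytime-valid alarm."""

    def __init__(self, alpha=0.05,
                 components=((0.5, 0.5), (2, 2), (2, 5), (5, 2), (1, 3), (3, 1))):
        self.alpha = alpha
        self.params = components
        self.log_m = [0.0] * len(components)  # per-component log likelihood ratios

    def update(self, u):
        # Clip to the open interval to avoid log(0) on boundary PITs.
        eps = 1e-12
        u = min(max(u, eps), 1.0 - eps)
        for k, (a, b) in enumerate(self.params):
            self.log_m[k] += log_beta_pdf(u, a, b)
        # Log of the mixture e-value, computed stably via log-sum-exp.
        mx = max(self.log_m)
        log_e = mx + math.log(sum(math.exp(l - mx) for l in self.log_m) / len(self.log_m))
        return log_e >= math.log(1.0 / self.alpha), log_e

# Illustrative drifted stream: raising Uniform(0,1) draws to the 0.25 power
# yields Beta(4, 1) PITs piled near 1, i.e. a badly miscalibrated model.
random.seed(7)
mon = PITEMonitor(alpha=0.05)
alarm_at, log_e = None, 0.0
for t in range(1, 501):
    u = random.random() ** 0.25
    alarm, log_e = mon.update(u)
    if alarm:
        alarm_at = t
        break
```

Because each component is only multiplied by its own likelihood ratio, the monitor runs in O(K) time per observation for K grid points, and the mixture pays at most a log K penalty relative to the best single alternative.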
Source: arXiv:2603.13156v1 (http://arxiv.org/abs/2603.13156v1) | PDF: https://arxiv.org/pdf/2603.13156v1
Mar 16, 2026
Data Science
Statistics