Research Paper · Researchia:202601.08eec861 · [Robotics]

When to Act: Calibrated Confidence for Reliable Human Intention Prediction in Assistive Robotics

Johannes A. Gaus

Abstract

Assistive devices must determine both what a user intends to do and how reliable that prediction is before providing support. We introduce a safety-critical triggering framework based on calibrated probabilities for multimodal next-action prediction in Activities of Daily Living. Raw model confidence often fails to reflect true correctness, posing a safety risk. Post-hoc calibration aligns predicted confidence with empirical reliability and reduces miscalibration by about an order of magnitude without affecting accuracy. The calibrated confidence drives a simple ACT/HOLD rule that acts only when reliability is high and withholds assistance otherwise. This turns the confidence threshold into a quantitative safety parameter for assisted actions and enables verifiable behavior in an assistive control loop.
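The ACT/HOLD rule from the abstract can be sketched as follows: logits are temperature-scaled (a standard post-hoc calibration method; the paper does not specify which calibrator it uses), and assistance is triggered only when the calibrated top-class confidence clears a threshold. The temperature and threshold values here are illustrative assumptions, not values reported in the paper.

```python
import math

def softmax(logits, temperature=1.0):
    # Divide logits by T before the softmax; T > 1 softens
    # overconfident predictions (temperature scaling).
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def act_or_hold(logits, temperature, threshold):
    """Return ("ACT", action_index) if the calibrated confidence
    clears the threshold; otherwise ("HOLD", None), withholding
    assistance. The threshold acts as the safety parameter."""
    probs = softmax(logits, temperature)
    conf = max(probs)
    if conf >= threshold:
        return "ACT", probs.index(conf)
    return "HOLD", None

# Hypothetical overconfident logits: the raw model (T=1) would act,
# but after calibration (T=2.5, assumed) the confidence no longer
# clears the 0.6 threshold, so the device holds.
logits = [3.0, 1.0, 0.2]
print(act_or_hold(logits, temperature=1.0, threshold=0.6))  # → ('ACT', 0)
print(act_or_hold(logits, temperature=2.5, threshold=0.6))  # → ('HOLD', None)
```

Note how calibration changes the decision without changing the predicted class: accuracy is untouched, only the ACT/HOLD gating moves, which is the behavior the abstract attributes to post-hoc calibration.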

Submission: 1/8/2026
Subjects: Robotics

