Research Paper · Researchia:202601.1204f432 · Machine Learning

Reward-Preserving Attacks For Robust Reinforcement Learning

Lucas Schott

Abstract

Adversarial robustness in RL is difficult because perturbations affect entire trajectories: strong attacks can break learning, while weak attacks yield little robustness, and the appropriate strength varies by state. We propose α-reward-preserving attacks, which adapt the strength of the adversary so that an α fraction of the nominal-to-worst-case return gap remains achievable at each state. In deep RL, we use a gradient-based attack direction and learn a state-dependent magnitude η ≤ η_B selected via a critic Q^π_α((s, a), η) trained off-policy over diverse radii. This adaptive tuning calibrates attack strength and, with intermediate α, improves robustness across radii while preserving nominal performance, outperforming fixed- and random-radius baselines.
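The radius-selection rule the abstract describes can be illustrated with a small sketch. Everything here is a hypothetical reconstruction: the discrete radius grid, the critic signature `q(s, a, eta)`, and the selection rule (pick the strongest attack whose critic estimate still preserves an α fraction of the nominal-to-worst-case return gap) are assumptions for illustration, not the paper's exact implementation.

```python
def select_radius(q, s, a, radii, alpha):
    """Pick the largest attack radius whose critic value keeps an alpha
    fraction of the nominal-to-worst-case return gap achievable.

    q     -- callable (s, a, eta) -> estimated return under an attack of radius eta
             (stand-in for the critic Q^pi_alpha((s, a), eta))
    radii -- candidate radii; the smallest plays the role of "no attack",
             the largest plays the role of the budget eta_B
    alpha -- fraction of the return gap to preserve, in [0, 1]
    """
    radii = sorted(radii)
    q_nominal = q(s, a, radii[0])   # estimated return with no perturbation
    q_worst = q(s, a, radii[-1])    # estimated return under the strongest attack
    # Target return: stay at least alpha of the gap above the worst case.
    target = q_worst + alpha * (q_nominal - q_worst)
    chosen = radii[0]
    for eta in radii:
        if q(s, a, eta) >= target:
            chosen = eta            # strongest radius still meeting the target
    return chosen

# Toy monotone critic: estimated return degrades linearly with the radius.
toy_q = lambda s, a, eta: 1.0 - eta
print(select_radius(toy_q, None, None, [0.0, 0.25, 0.5, 0.75, 1.0], alpha=0.5))
# -> 0.5: half the gap is preserved, so half the budget is used here
```

With α = 1 the rule degenerates to no attack, and with α = 0 it always uses the full budget; intermediate α values interpolate between the two, which is the regime the abstract reports as most effective.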

Submission: 1/12/2026
Subjects: Machine Learning
