Research Paper · Researchia:202603.20044 · Artificial Intelligence > AI

OS-Themis: A Scalable Critic Framework for Generalist GUI Rewards

Zehao Li

Abstract

Reinforcement Learning (RL) has the potential to improve the robustness of GUI agents in stochastic environments, yet training is highly sensitive to the quality of the reward function. Existing reward approaches struggle to achieve both scalability and performance. To address this, we propose OS-Themis, a scalable and accurate multi-agent critic framework. Unlike a single judge, OS-Themis decomposes trajectories into verifiable milestones to isolate critical evidence for decision making and employs a review mechanism to strictly audit the evidence chain before making the final verdict. To facilitate evaluation, we further introduce OmniGUIRewardBench (OGRBench), a holistic cross-platform benchmark for GUI outcome rewards, where all evaluated models achieve their best performance under OS-Themis. Extensive experiments on AndroidWorld show that OS-Themis yields a 10.3% improvement when used to support online RL training, and a 6.9% gain when used for trajectory validation and filtering in the self-training loop, highlighting its potential to drive agent evolution.
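The abstract describes a two-stage critic: the trajectory is decomposed into verifiable milestones, each milestone is checked against evidence, and a review step audits the full evidence chain before emitting the final verdict. The paper does not publish implementation details here, so the sketch below is purely illustrative; the trajectory format, the substring-based `verify` check, and the `audit` rule are all hypothetical stand-ins for the model-based judges the framework would actually use.

```python
from typing import Callable, List, Tuple

# Hypothetical trajectory encoding: an ordered list of observed GUI actions.
Trajectory = List[str]
Evidence = Tuple[str, str]  # (milestone, supporting step) — assumed format


def verify(trajectory: Trajectory, milestone: str) -> Tuple[bool, str]:
    """Toy milestone verifier: succeeds if some step mentions the milestone.

    In the real framework this would be a model-based judge inspecting
    screenshots or UI state, not a substring match.
    """
    for step in trajectory:
        if milestone in step:
            return True, step
    return False, ""


def audit(chain: List[Evidence]) -> bool:
    """Toy review pass: the evidence chain must be non-empty and each
    milestone must be backed by a distinct trajectory step."""
    steps = [ev for _, ev in chain]
    return len(chain) > 0 and len(set(steps)) == len(steps)


def critic_reward(
    trajectory: Trajectory,
    milestones: List[str],
    verify_fn: Callable[[Trajectory, str], Tuple[bool, str]] = verify,
    audit_fn: Callable[[List[Evidence]], bool] = audit,
) -> float:
    """Binary outcome reward: 1.0 only if every milestone is verified
    AND the audited evidence chain passes review."""
    chain: List[Evidence] = []
    for m in milestones:
        ok, ev = verify_fn(trajectory, m)
        if not ok:
            return 0.0  # a single failed milestone sinks the verdict
        chain.append((m, ev))
    return 1.0 if audit_fn(chain) else 0.0


# Usage: a toy "enable Wi-Fi" trajectory with two milestones.
traj = ["open_settings", "toggle_wifi", "confirm_dialog"]
print(critic_reward(traj, ["open_settings", "toggle_wifi"]))  # complete
print(critic_reward(traj, ["open_settings", "toggle_bluetooth"]))  # missing
```

The key design point mirrored here is that verification is decomposed per milestone (isolating critical evidence) while the final verdict is gated by a separate audit over the whole chain, rather than a single end-to-end judgment.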


Source: arXiv:2603.19191v1 — http://arxiv.org/abs/2603.19191v1
PDF: https://arxiv.org/pdf/2603.19191v1

Submission: 3/20/2026
Subjects: Artificial Intelligence (AI)

