
Boosting CVaR Policy Optimization with Quantile Gradients

Yudong Luo

Abstract

Optimizing Conditional Value-at-Risk (CVaR) with policy gradient methods (a.k.a. CVaR-PG) suffers from significant sample inefficiency. This inefficiency stems from the method's focus on tail-end performance, which discards many sampled trajectories. We address this problem by augmenting CVaR with an expected quantile term. Quantile optimization admits a dynamic programming formulation that leverages all sampled data, thus improving sample efficiency. This augmentation does not alter the CVaR objective, since CVaR corresponds to the expectation of the quantile over the tail. Empirical results in domains with verifiable risk-averse behavior show that our algorithm, within the Markovian policy class, substantially improves upon CVaR-PG and consistently outperforms other existing methods.
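The abstract's key identity — that CVaR is the expectation of the quantile (VaR) over the tail levels — can be checked with a small numerical sketch. This is not code from the paper; the function name, the lower-tail convention, and the Gaussian test distribution are illustrative assumptions:

```python
import numpy as np

def cvar_lower_tail(returns, alpha=0.1):
    """Empirical lower-tail CVaR: the mean of the worst alpha-fraction of returns.

    Equivalently, CVaR_alpha = (1/alpha) * integral of VaR_u over u in (0, alpha],
    i.e. the expectation of the quantile function over the tail.
    """
    var_alpha = np.quantile(returns, alpha)   # alpha-quantile (VaR)
    tail = returns[returns <= var_alpha]      # worst alpha-fraction of samples
    return tail.mean()

rng = np.random.default_rng(0)
samples = rng.normal(size=100_000)
# For a standard normal, the analytic value is -phi(Phi^{-1}(0.1)) / 0.1, about -1.755
print(cvar_lower_tail(samples, alpha=0.1))
```

Averaging sample quantiles over tail levels `u <= alpha` gives the same estimate, which is why optimizing quantiles, as the paper proposes, leaves the CVaR objective unchanged.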


Source: arXiv:2601.22100v1 (http://arxiv.org/abs/2601.22100v1)
PDF: https://arxiv.org/pdf/2601.22100v1

Submission: 1/29/2026
Subjects: Machine Learning

