Research Paper | Researchia:202603.19054 | [Data Science > Machine Learning]

Operator-Theoretic Foundations and Policy Gradient Methods for General MDPs with Unbounded Costs

Abhishek Gupta

Abstract

Markov decision processes (MDPs) are viewed as the optimization of an objective function over certain linear operators on general function spaces. Using the well-established perturbation theory of linear operators, this viewpoint allows one to identify derivatives of the objective function as a function of the linear operators. This leads to generalizations of many well-known results in reinforcement learning to cases with general state and action spaces. Prior results of this type were established only in finite-state, finite-action MDP settings and in settings with certain linear function approximations. The framework also leads to new low-complexity PPO-type reinforcement learning algorithms for general state and action space MDPs.
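
To make the operator viewpoint concrete, here is a minimal finite-dimensional sketch of the kind of derivative identity the abstract describes; it is not the paper's general-function-space construction. The symbols below are illustrative assumptions: a discount factor $\gamma \in (0,1)$, an initial distribution $\mu$, a policy-induced transition operator $P_\pi$, and a per-stage cost $c_\pi$. With the objective written as a function of the linear operators,

$$J(\pi) = \mu^\top (I - \gamma P_\pi)^{-1} c_\pi,$$

the first-order perturbation expansion of the resolvent,

$$(I - \gamma (P_\pi + \Delta))^{-1} = (I - \gamma P_\pi)^{-1} + \gamma\, (I - \gamma P_\pi)^{-1} \Delta\, (I - \gamma P_\pi)^{-1} + O(\|\Delta\|^2),$$

gives, for a smoothly parameterized policy $\pi_\theta$,

$$\nabla_\theta J(\theta) = \mu^\top (I - \gamma P_{\pi_\theta})^{-1} \Big( \nabla_\theta c_{\pi_\theta} + \gamma\, (\nabla_\theta P_{\pi_\theta})\, V_{\pi_\theta} \Big), \qquad V_{\pi_\theta} = (I - \gamma P_{\pi_\theta})^{-1} c_{\pi_\theta},$$

which recovers the classical policy gradient theorem once $\mu^\top (I - \gamma P_{\pi_\theta})^{-1}$ is read as the discounted state-occupancy measure. Per the abstract, the paper's contribution is carrying this calculus over to linear operators on general function spaces with unbounded costs, where the finite-dimensional matrix identities above require operator-theoretic justification.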


Source: arXiv:2603.17875v1 (http://arxiv.org/abs/2603.17875v1)
PDF: https://arxiv.org/pdf/2603.17875v1

Submission: 3/19/2026
Subjects: Machine Learning; Data Science
