Research Paper | Researchia: 202603.19054

Operator-Theoretic Foundations and Policy Gradient Methods for General MDPs with Unbounded Costs

Abhishek Gupta


Submitted: March 19, 2026
Subjects: Machine Learning; Data Science

Description / Details

Markov decision processes (MDPs) are viewed as the optimization of an objective function over certain linear operators on general function spaces. Using the well-established perturbation theory of linear operators, this viewpoint allows one to identify derivatives of the objective function with respect to these linear operators. This leads to generalizations of many well-known results in reinforcement learning to MDPs with general state and action spaces. Prior results of this type were established only in finite-state, finite-action MDP settings and in settings with certain linear function approximations. The framework also leads to new low-complexity PPO-type reinforcement learning algorithms for general state and action space MDPs.
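For readers unfamiliar with the "PPO-type" algorithms the abstract refers to, the following is a minimal, purely illustrative sketch of the standard clipped surrogate objective that such methods optimize. It is not taken from the paper; the function name, the clipping parameter `eps`, and the toy numbers are all hypothetical.

```python
def ppo_clip_objective(ratios, advantages, eps=0.2):
    """Average PPO clipped surrogate: mean of min(r*A, clip(r, 1-eps, 1+eps)*A).

    ratios:     importance ratios pi_new(a|s) / pi_old(a|s) per sample
    advantages: advantage estimates A(s, a) per sample
    eps:        clipping radius around 1 (illustrative default)
    """
    total = 0.0
    for r, a in zip(ratios, advantages):
        # Clip the importance ratio into [1 - eps, 1 + eps] ...
        clipped = max(1.0 - eps, min(r, 1.0 + eps))
        # ... and take the pessimistic (smaller) of the two surrogate terms.
        total += min(r * a, clipped * a)
    return total / len(ratios)

# Toy example: a ratio of 1.5 with positive advantage is clipped to 1.2,
# which caps how much a single favorable sample can move the policy.
obj = ppo_clip_objective([1.5, 0.9], [1.0, -1.0], eps=0.2)
```

The clipping keeps each policy update close to the sampling policy, which is the practical ingredient that low-complexity PPO-type methods retain even in general state and action spaces.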


Source: arXiv:2603.17875v1 - http://arxiv.org/abs/2603.17875v1
PDF: https://arxiv.org/pdf/2603.17875v1
Original Link: http://arxiv.org/abs/2603.17875v1
