Operator-Theoretic Foundations and Policy Gradient Methods for General MDPs with Unbounded Costs
Abstract
The Markov decision process (MDP) problem is viewed as the optimization of an objective function over a class of linear operators acting on general function spaces. Via the well-established perturbation theory of linear operators, this viewpoint allows one to identify derivatives of the objective, regarded as a function of these linear operators. This generalizes many well-known results in reinforcement learning to MDPs with general state and action spaces; prior results of this type were established only in finite-state, finite-action settings and in settings with certain linear function approximations. The framework also yields new low-complexity PPO-type reinforcement learning algorithms for general state and action space MDPs.
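For orientation, the following are the standard finite-MDP statements that an operator-theoretic derivative of this kind would generalize: the classical policy gradient theorem and the PPO clipped surrogate objective. These formulas are supplied as background and are not quoted from the paper; here $J$ denotes the discounted objective, $d^{\pi_\theta}$ the discounted state-occupancy measure, $Q^{\pi_\theta}$ the action-value function, $\hat{A}_t$ an advantage estimate, $\epsilon$ the clip parameter, and $r_t(\theta)$ the policy probability ratio.

\[
\nabla_\theta J(\theta) = \mathbb{E}_{s \sim d^{\pi_\theta},\, a \sim \pi_\theta(\cdot \mid s)}\!\left[ \nabla_\theta \log \pi_\theta(a \mid s)\, Q^{\pi_\theta}(s, a) \right]
\]

\[
L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\!\left[ \min\!\left( r_t(\theta)\,\hat{A}_t,\ \mathrm{clip}\!\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right) \hat{A}_t \right) \right], \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}
\]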
Source: arXiv:2603.17875v1 (http://arxiv.org/abs/2603.17875v1; PDF: https://arxiv.org/pdf/2603.17875v1)