Reinforcement Learning for Discounted and Ergodic Control of Diffusion Processes
Abstract
This paper develops a quantized Q-learning algorithm for the optimal control of controlled diffusion processes under both discounted and ergodic (average) cost criteria. We first establish the near-optimality of finite-state MDP approximations to discrete-time discretizations of the diffusion, then introduce a quantized Q-learning scheme and prove its almost-sure convergence to near-optimal policies for the finite MDP. When interpolated to continuous time, these policies are shown to be near-optimal for the original diffusion model under discounted costs and, via a vanishing-discount argument, also under ergodic costs for sufficiently small discount factors. The analysis applies under mild conditions (Lipschitz dynamics, non-degeneracy, bounded continuous costs, and Lyapunov stability for the ergodic case), without requiring prior knowledge of the system dynamics or restrictions on control policies beyond admissibility. Our results complement recent work on continuous-time reinforcement learning for diffusions by providing explicit near-optimality rates and by extending rigorous guarantees to both discounted and ergodic cost criteria for diffusions with unbounded state space.
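To make the pipeline the abstract describes concrete, the following is a minimal sketch, not the paper's construction: a one-dimensional diffusion is time-discretized by Euler-Maruyama, the state is mapped to a finite grid by truncation and nearest-point quantization, and asynchronous Q-learning is run on the resulting finite MDP under a discounted cost. The drift b, diffusion coefficient sigma, cost c, grid bounds, action set, and all step-size choices are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumed model, not the paper's exact construction):
# quantized Q-learning for a 1-D controlled diffusion
#   dX = b(X, u) dt + sigma(X, u) dW
# under a discounted cost criterion.
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative model (hypothetical choices satisfying the stated conditions) ---
b = lambda x, u: -x + u                   # Lipschitz drift
sigma = lambda x, u: 1.0                  # non-degenerate diffusion coefficient
c = lambda x, u: min(x**2 + u**2, 10.0)   # bounded continuous running cost

h = 0.05                        # time-discretization step
beta = 1.0                      # continuous-time discount rate
gamma = np.exp(-beta * h)       # induced discrete-time discount factor

# --- State quantization: map the real line onto a finite grid ---
grid = np.linspace(-3.0, 3.0, 61)     # truncated, uniformly quantized state space
actions = np.array([-1.0, 0.0, 1.0])  # finite action set

def quantize(x):
    """Index of the nearest grid point (states outside the box are clipped)."""
    return int(np.argmin(np.abs(grid - x)))

def step(x, u):
    """One Euler-Maruyama transition of the time-discretized diffusion."""
    return x + b(x, u) * h + sigma(x, u) * np.sqrt(h) * rng.standard_normal()

# --- Asynchronous Q-learning on the quantized model ---
Q = np.zeros((grid.size, actions.size))
visits = np.zeros_like(Q)
x = 0.0
for t in range(200_000):
    i = quantize(x)
    # epsilon-greedy exploration over the finite action set
    a = rng.integers(actions.size) if rng.random() < 0.1 else int(np.argmin(Q[i]))
    u = actions[a]
    x_next = step(x, u)
    j = quantize(x_next)
    visits[i, a] += 1
    alpha = 1.0 / visits[i, a]  # step sizes satisfying the usual Robbins-Monro conditions
    # Bellman update with a min, since c is a cost to be minimized
    Q[i, a] += alpha * (c(x, u) * h + gamma * Q[j].min() - Q[i, a])
    x = x_next

policy = actions[np.argmin(Q, axis=1)]  # near-optimal policy on the quantized states
```

Interpolating `policy` back to continuous time (e.g., holding the action constant over each step of length h) corresponds to the interpolation step in the abstract; the ergodic case would additionally require the Lyapunov stability condition and a vanishing-discount limit.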
Source: arXiv:2603.13155v1 (http://arxiv.org/abs/2603.13155v1; PDF: https://arxiv.org/pdf/2603.13155v1)