Research Paper | Researchia:202603.16024 [Mathematics]

Reinforcement Learning for Discounted and Ergodic Control of Diffusion Processes

Erhan Bayraktar

Abstract

This paper develops a quantized Q-learning algorithm for the optimal control of controlled diffusion processes on $\mathbb{R}^d$ under both discounted and ergodic (average) cost criteria. We first establish near-optimality of finite-state MDP approximations to discrete-time discretizations of the diffusion, then introduce a quantized Q-learning scheme and prove its almost-sure convergence to near-optimal policies for the finite MDP. These policies, when interpolated to continuous time, are shown to be near-optimal for the original diffusion model under discounted costs and -- via a vanishing-discount argument -- also under ergodic costs for sufficiently small discount factors. The analysis applies under mild conditions (Lipschitz dynamics, non-degeneracy, bounded continuous costs, and Lyapunov stability for the ergodic case) without requiring prior knowledge of the system dynamics or restrictions on control policies (beyond admissibility). Our results complement recent work on continuous-time reinforcement learning for diffusions by providing explicit near-optimality rates and extending rigorous guarantees to both discounted and ergodic cost criteria for diffusions with unbounded state space.
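The pipeline the abstract outlines -- time-discretize the diffusion, quantize the state into a finite MDP, run tabular Q-learning, then act greedily -- can be illustrated with a minimal sketch. Everything below is an assumption for illustration only (a toy 1-D diffusion $dX_t = u_t\,dt + \sigma\,dW_t$ with a quadratic running cost, a three-point action set, and a uniform quantizer); none of the constants, rates, or conditions of the paper are reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constants (not from the paper): noise level, Euler step, discount.
sigma, dt, beta = 0.5, 0.1, 0.95
actions = np.array([-1.0, 0.0, 1.0])      # assumed finite action set
bins = np.linspace(-2.0, 2.0, 21)         # uniform quantizer: 20 cells on [-2, 2]

def quantize(x):
    """Map a continuous state to a finite cell index (states are clipped to [-2, 2])."""
    return int(np.clip(np.digitize(x, bins) - 1, 0, len(bins) - 2))

def step(x, u):
    """One Euler-Maruyama step of dX = u dt + sigma dW, with running cost (x^2 + 0.1 u^2) dt."""
    x_next = x + u * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return float(np.clip(x_next, -2.0, 2.0)), (x**2 + 0.1 * u**2) * dt

# Tabular Q-learning on the quantized MDP (costs are minimized, so greedy = argmin).
Q = np.zeros((len(bins) - 1, len(actions)))
x = 0.0
for t in range(50_000):
    s = quantize(x)
    a = rng.integers(len(actions)) if rng.random() < 0.1 else int(Q[s].argmin())
    x_next, cost = step(x, actions[a])
    lr = 1.0 / (1.0 + 0.01 * t)           # diminishing step size
    Q[s, a] += lr * (cost + beta * Q[quantize(x_next)].min() - Q[s, a])
    x = x_next

# Greedy policy over the quantized states; interpolating it back to continuous
# time/space is what yields the near-optimal control in the paper's setting.
policy = actions[Q.argmin(axis=1)]
```

The sketch only exercises the discounted criterion; the paper's ergodic result is recovered by a vanishing-discount argument (letting the discount factor approach 1), which the toy loop above does not attempt.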


Source: arXiv:2603.13155v1 - http://arxiv.org/abs/2603.13155v1
PDF: https://arxiv.org/pdf/2603.13155v1

Submission: 3/16/2026
Subjects: Mathematics