Gaussian Approximation for Asynchronous Q-learning
Abstract
In this paper, we derive rates of convergence in the high-dimensional central limit theorem for Polyak-Ruppert averaged iterates generated by the asynchronous Q-learning algorithm with a polynomially decaying stepsize. Assuming that the sequence of state-action-next-state triples forms a uniformly geometrically ergodic Markov chain, we establish an explicit rate of convergence over the class of hyper-rectangles in terms of n, the number of samples used by the algorithm, and |S| and |A|, the numbers of states and actions, respectively. To obtain this result, we prove a high-dimensional central limit theorem for sums of martingale differences, which may be of independent interest. Finally, we present bounds on high-order moments of the algorithm's last iterate.
Source: arXiv:2604.07323v1 - http://arxiv.org/abs/2604.07323v1 PDF: https://arxiv.org/pdf/2604.07323v1
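The algorithm studied above can be illustrated with a minimal sketch of asynchronous Q-learning with a polynomial stepsize and Polyak-Ruppert averaging. The 2-state, 2-action MDP, the reward table, the uniform behavior policy, and the stepsize exponent rho are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def async_q_learning(n_steps, gamma=0.9, rho=0.7, seed=0):
    """Asynchronous Q-learning with Polyak-Ruppert averaging (toy sketch).

    All MDP parameters below are hypothetical choices for illustration.
    """
    rng = np.random.default_rng(seed)
    n_states, n_actions = 2, 2
    # Toy transition kernel P[s, a, s'] and reward table R[s, a].
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.5, 0.5], [0.7, 0.3]]])
    R = np.array([[1.0, 0.0], [0.0, 1.0]])

    Q = np.zeros((n_states, n_actions))
    Q_bar = np.zeros_like(Q)           # Polyak-Ruppert average of iterates
    s = 0
    for k in range(1, n_steps + 1):
        a = rng.integers(n_actions)    # behavior policy: uniform exploration
        s_next = rng.choice(n_states, p=P[s, a])
        alpha = k ** (-rho)            # polynomial stepsize alpha_k = k^{-rho}
        # Asynchronous update: only the visited (s, a) entry is modified.
        td_target = R[s, a] + gamma * Q[s_next].max()
        Q[s, a] += alpha * (td_target - Q[s, a])
        # Online running average over all iterates so far.
        Q_bar += (Q - Q_bar) / k
        s = s_next
    return Q, Q_bar
```

The averaged iterate Q_bar, rather than the last iterate Q, is the object whose Gaussian approximation the paper analyzes; averaging smooths out the noise left by the slowly decaying stepsize.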