Near-Optimal Regret for KL-Regularized Multi-Armed Bandits
Abstract
Recent studies have shown that reinforcement learning with KL-regularized objectives can enjoy faster rates of convergence or even logarithmic regret, in contrast to the classical $\sqrt{T}$-type regret of the unregularized setting. However, the statistical efficiency of online learning under KL-regularized objectives remains far from completely characterized, even when specialized to multi-armed bandits (MABs). We address this problem for MABs via a sharp analysis of KL-UCB using a novel peeling argument, which yields an $\widetilde{O}(\eta \log T)$ upper bound: the first high-probability regret bound with linear dependence on $\eta$. Here, $T$ is the time horizon, $K$ is the number of arms, $\eta$ is the regularization intensity, and $\widetilde{O}(\cdot)$ hides all logarithmic factors except those involving $T$. The near-tightness of our analysis is certified by the first non-constant lower bound, which follows from subtle hard-instance constructions and a tailored decomposition of the Bayes prior. Moreover, in the low-regularization regime (i.e., large $\eta$), we show that the KL-regularized regret for MABs is $\eta$-independent and scales as $\widetilde{\Theta}(\sqrt{KT})$. Overall, our results provide a thorough understanding of KL-regularized MABs across all regimes of $\eta$ and yield nearly optimal bounds in terms of $T$, $K$, and $\eta$.
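To make the quantities in the abstract concrete, the following is a minimal formalization of a KL-regularized MAB, assuming the standard objective with a reference policy $\pi_0$ and the common convention that larger $\eta$ corresponds to weaker regularization; the reference policy $\pi_0$, the reward function $r$, and the exact placement of $\eta$ are illustrative assumptions rather than the paper's verbatim definitions.

% A sketch of the assumed KL-regularized bandit setup (not taken verbatim from the paper).
% \pi_0: reference distribution over the K arms; r(a): mean reward of arm a;
% \eta: regularization intensity (under this convention, larger \eta = weaker regularization).
\begin{align*}
  J_\eta(\pi) &= \mathbb{E}_{a\sim\pi}\bigl[r(a)\bigr] - \tfrac{1}{\eta}\,\mathrm{KL}\bigl(\pi \,\|\, \pi_0\bigr)
    && \text{(regularized value of a policy } \pi\text{)} \\
  \pi_\eta^*(a) &\propto \pi_0(a)\, e^{\eta\, r(a)}
    && \text{(Gibbs-form maximizer of } J_\eta\text{)} \\
  \mathrm{Reg}(T) &= \sum_{t=1}^{T} \bigl( J_\eta(\pi_\eta^*) - J_\eta(\pi_t) \bigr)
    && \text{(cumulative regret of the played policies } \pi_t\text{)}
\end{align*}

Under these conventions, small $\eta$ (strong regularization) is the regime where the logarithmic-in-$T$ bound applies, while as $\eta$ grows the objective approaches the unregularized MAB and the $\sqrt{KT}$-type rate takes over.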
Source: arXiv:2603.02155v1 - http://arxiv.org/abs/2603.02155v1 - PDF: https://arxiv.org/pdf/2603.02155v1