
Near-Optimal Regret for KL-Regularized Multi-Armed Bandits

Kaixuan Ji

Abstract

Recent studies have shown that reinforcement learning with KL-regularized objectives can enjoy faster rates of convergence or logarithmic regret, in contrast to the classical $\sqrt{T}$-type regret in the unregularized setting. However, the statistical efficiency of online learning with respect to KL-regularized objectives remains far from completely characterized, even when specialized to multi-armed bandits (MABs). We address this problem for MABs via a sharp analysis of KL-UCB using a novel peeling argument, which yields a $\tilde{O}(\eta K \log^2 T)$ upper bound: the first high-probability regret bound with linear dependence on $K$. Here, $T$ is the time horizon, $K$ is the number of arms, $\eta^{-1}$ is the regularization intensity, and $\tilde{O}$ hides all logarithmic factors except those involving $\log T$. The near-tightness of our analysis is certified by the first non-constant lower bound $\Omega(\eta K \log T)$, which follows from subtle hard-instance constructions and a tailored decomposition of the Bayes prior. Moreover, in the low-regularization regime (i.e., large $\eta$), we show that the KL-regularized regret for MABs is $\eta$-independent and scales as $\tilde{\Theta}(\sqrt{KT})$. Overall, our results provide a thorough understanding of KL-regularized MABs across all regimes of $\eta$ and yield nearly optimal bounds in terms of $K$, $\eta$, and $T$.
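For context, the objective these results concern is, in its standard formulation (conventional notation below, not quoted from the paper), the KL-regularized value over the simplex of arm distributions, with a fixed reference policy $\pi_{\mathrm{ref}}$ and regularization intensity $\eta^{-1}$; its maximizer has a closed Gibbs form:

```latex
% Standard KL-regularized MAB objective (conventional notation; a sketch of
% the usual setup, not a verbatim statement from the paper).
\max_{\pi \in \Delta([K])} \;
  \mathbb{E}_{a \sim \pi}\bigl[r(a)\bigr]
  - \eta^{-1}\,\mathrm{KL}\bigl(\pi \,\|\, \pi_{\mathrm{ref}}\bigr),
\qquad
\pi^{\star}(a) =
  \frac{\pi_{\mathrm{ref}}(a)\,e^{\eta r(a)}}
       {\sum_{b=1}^{K} \pi_{\mathrm{ref}}(b)\,e^{\eta r(b)}}.
```

The closed form makes the two regimes in the abstract plausible: small $\eta$ means strong regularization toward $\pi_{\mathrm{ref}}$, which is where the logarithmic-in-$T$ rate $\tilde{O}(\eta K \log^2 T)$ lives, while large $\eta$ approaches an essentially unregularized MAB with the familiar $\tilde{\Theta}(\sqrt{KT})$ rate.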

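The paper analyzes a KL-UCB-style algorithm; its exact index for the regularized objective is specified in the paper itself. As a self-contained point of reference only, below is a minimal sketch of the classical KL-UCB index policy for Bernoulli arms (Garivier and Cappé, 2011); function names, the bisection depth, and the exploration constant `c` are illustrative choices, not the paper's.

```python
import math
import random

def bernoulli_kl(p: float, q: float) -> float:
    """KL divergence between Bernoulli(p) and Bernoulli(q), with clamping."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean: float, count: int, t: int, c: float = 0.0) -> float:
    """Largest q with count * kl(mean, q) <= log t + c log log t, by bisection."""
    logt = math.log(max(t, 2))
    budget = logt + c * math.log(max(logt, 1.0))
    lo, hi = mean, 1.0
    for _ in range(50):  # q -> kl(mean, q) is increasing on [mean, 1]
        mid = (lo + hi) / 2
        if count * bernoulli_kl(mean, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

def run_kl_ucb(true_means, horizon: int, seed: int = 0):
    """Play KL-UCB on Bernoulli arms; return how often each arm was pulled."""
    rng = random.Random(seed)
    K = len(true_means)
    counts, sums = [0] * K, [0.0] * K
    for t in range(1, horizon + 1):
        if t <= K:
            arm = t - 1  # pull each arm once to initialize estimates
        else:
            arm = max(range(K),
                      key=lambda a: kl_ucb_index(sums[a] / counts[a], counts[a], t))
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

if __name__ == "__main__":
    # The best arm (mean 0.7) should absorb almost all pulls.
    print(run_kl_ucb([0.3, 0.5, 0.7], horizon=5000))
```

A natural adaptation to the regularized setting would feed optimistic reward estimates into the Gibbs policy above and sample from it rather than committing to a single argmax arm, but the paper's precise construction and its peeling-based analysis should be consulted directly.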

Source: arXiv:2603.02155v1 (http://arxiv.org/abs/2603.02155v1)
PDF: https://arxiv.org/pdf/2603.02155v1

Submission: 3/4/2026
Subjects: Artificial Intelligence (AI)
