Research Paper · Researchia:202603.25052 · [Data Science > Machine Learning]

End-to-End Efficient RL for Linear Bellman Complete MDPs with Deterministic Transitions

Zakaria Mhammedi

Abstract

We study reinforcement learning (RL) with linear function approximation in Markov Decision Processes (MDPs) satisfying linear Bellman completeness -- a fundamental setting in which the Bellman backup of any linear value function remains linear. While this setting is statistically tractable, prior computationally efficient algorithms are either limited to small action spaces or rely on strong oracle assumptions over the feature space. We provide a computationally efficient algorithm for linear Bellman complete MDPs with deterministic transitions, stochastic initial states, and stochastic rewards. For finite action spaces, our algorithm is end-to-end efficient; for large or infinite action spaces, it requires only a standard argmax oracle over actions. The algorithm learns an ε-optimal policy with sample and computational complexity polynomial in the horizon, the feature dimension, and 1/ε.
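For context, linear Bellman completeness is typically formalized as follows. This is the standard definition from the literature, given here as an illustrative sketch; the paper's exact statement (e.g., with per-timestep feature maps \phi_h or bounded parameter sets) may differ in its details:

\forall\, \theta \in \mathbb{R}^d \;\; \exists\, \theta' \in \mathbb{R}^d \text{ such that, for all } (s, a): \quad \phi(s, a)^\top \theta' \;=\; \mathbb{E}\big[r(s, a)\big] \;+\; \mathbb{E}_{s' \sim P(\cdot \mid s, a)}\Big[\max_{a' \in \mathcal{A}} \phi(s', a')^\top \theta\Big].

Under the paper's deterministic-transition assumption, the expectation over s' collapses to a single successor state, while the expectation over the stochastic reward remains. The argmax oracle mentioned for large or infinite action spaces is then the map (s, \theta) \mapsto \operatorname{argmax}_{a \in \mathcal{A}} \phi(s, a)^\top \theta, i.e., greedy action selection against a linear value estimate.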


Source: arXiv:2603.23461v1 - http://arxiv.org/abs/2603.23461v1
PDF: https://arxiv.org/pdf/2603.23461v1

Submission: 3/25/2026
Subjects: Machine Learning; Data Science
