Research Paper | Researchia: 202603.25052

End-to-End Efficient RL for Linear Bellman Complete MDPs with Deterministic Transitions

Zakaria Mhammedi

Abstract

We study reinforcement learning (RL) with linear function approximation in Markov Decision Processes (MDPs) satisfying linear Bellman completeness -- a fundamental setting where the Bellman backup of any linear value function remains linear. While this setting is statistically tractable, prior computationally efficient algorithms are either limited to small action spaces or require strong oracle assumptions over the feature space. We provide a computationally efficient algorithm for linear Bellman complete MDPs with deterministic transitions, stochastic initial states, and stochastic rewards. For finite action spaces, our algorithm is end-to-end efficient; for large or infinite action spaces, we require only a standard argmax oracle over actions. Our algorithm learns an ε-optimal policy with sample and computational complexity polynomial in the horizon, feature dimension, and 1/ε.

Submitted: March 25, 2026
Subjects: Machine Learning; Data Science
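The linear Bellman completeness condition above can be made concrete with a toy check: pick a linear Q-function, apply the Bellman backup under deterministic transitions, and verify by least squares that the backup is again linear in the features. The sketch below is purely illustrative and is not the paper's algorithm; it uses one-hot (tabular) features, a special case in which every function of (s, a) is linear, so completeness holds exactly. All names and the tiny MDP are hypothetical.

```python
import numpy as np

n_states, n_actions = 4, 2
d = n_states * n_actions  # feature dimension

def phi(s, a):
    """One-hot feature vector for the (s, a) pair (tabular special case)."""
    v = np.zeros(d)
    v[s * n_actions + a] = 1.0
    return v

rng = np.random.default_rng(0)
# Deterministic transitions: each (s, a) maps to a single next state.
next_state = rng.integers(0, n_states, size=(n_states, n_actions))
reward = rng.standard_normal((n_states, n_actions))

theta = rng.standard_normal(d)  # weights of an arbitrary linear Q-function

def backup(s, a):
    """Bellman backup (TQ)(s, a) = r(s, a) + max_{a'} Q(s', a')."""
    s2 = next_state[s, a]
    return reward[s, a] + max(theta @ phi(s2, a2) for a2 in range(n_actions))

# Regress backup values onto the features; a zero residual means the
# backup is itself linear in phi, i.e. Bellman completeness holds.
pairs = [(s, a) for s in range(n_states) for a in range(n_actions)]
Phi = np.array([phi(s, a) for s, a in pairs])
y = np.array([backup(s, a) for s, a in pairs])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(np.allclose(Phi @ w, y))  # True: the backup stays linear
```

With richer (non-tabular) features the max over actions generally breaks linearity, which is exactly why the completeness assumption is a nontrivial restriction on the feature map.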


Source: arXiv:2603.23461v1 - http://arxiv.org/abs/2603.23461v1
PDF: https://arxiv.org/pdf/2603.23461v1


Submission Info
Date: Mar 25, 2026
Topic: Data Science
Area: Machine Learning