Research Paper · Researchia: 202603.05026 [Mathematics]

A Covering Framework for Offline POMDPs Learning using Belief Space Metric

Youheng Zhu

Abstract

In off-policy evaluation (OPE) for partially observable Markov decision processes (POMDPs), an agent must infer hidden states from past observations, which exacerbates both the curse of horizon and the curse of memory in existing OPE methods. This paper introduces a novel covering analysis framework that exploits the intrinsic metric structure of the belief space (distributions over latent states) to relax traditional coverage assumptions. By assuming that value-relevant functions are Lipschitz continuous on the belief space, we derive error bounds that mitigate exponential blow-ups in horizon and memory length. Our unified analysis technique applies to a broad class of OPE algorithms, yielding concrete error bounds and coverage requirements expressed in terms of belief-space metrics rather than raw history coverage. We illustrate the improved sample efficiency of this framework via two case studies: the double-sampling Bellman error minimization algorithm, and memory-based future-dependent value functions (FDVFs). In both cases, our coverage definition based on the belief-space metric yields tighter bounds.
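To make the central objects of the abstract concrete, the following is a minimal sketch (not taken from the paper) of a belief over latent states in a toy 2-state POMDP: a Bayes-filter belief update, and the total variation distance as one example of a metric on the belief simplex. The transition and observation matrices are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Illustrative 2-state POMDP (all numbers are assumptions for this sketch).
T = np.array([[0.9, 0.1],   # T[s, s']: latent-state transition probabilities
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],   # O[s', o]: observation probabilities
              [0.3, 0.7]])

def belief_update(b, obs):
    """One Bayes-filter step: predict through T, correct with O, normalize."""
    predicted = b @ T                # P(s') under the prior belief b
    unnorm = predicted * O[:, obs]   # joint P(s', o)
    return unnorm / unnorm.sum()     # posterior belief, back on the simplex

def tv_distance(b1, b2):
    """Total variation distance: one natural metric on the belief space."""
    return 0.5 * np.abs(b1 - b2).sum()

b0 = np.array([0.5, 0.5])
b1 = belief_update(b0, obs=0)        # posterior after observing o = 0
```

A Lipschitz assumption of the kind the abstract describes would then bound how much a value-relevant function can change between two beliefs by a constant times `tv_distance(b1, b2)` (or another belief-space metric), which is what lets coverage be measured over beliefs rather than over raw histories.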


Source: arXiv:2603.03191v1 (http://arxiv.org/abs/2603.03191v1)
PDF: https://arxiv.org/pdf/2603.03191v1

Submission: 3/5/2026
Subjects: Mathematics

