Research Paper · Researchia:202601.29187 · [Statistics & ML > Statistics]

Optimistic Transfer under Task Shift via Bellman Alignment

Jinhang Chai

Abstract

We study online transfer reinforcement learning (RL) in episodic Markov decision processes, where experience from related source tasks is available during learning on a target task. A fundamental difficulty is that task similarity is typically defined in terms of rewards or transitions, whereas online RL algorithms operate on Bellman regression targets. As a result, naively reusing source Bellman updates introduces systematic bias and invalidates regret guarantees. We identify one-step Bellman alignment as the correct abstraction for transfer in online RL and propose re-weighted targeting (RWT), an operator-level correction that retargets continuation values and compensates for transition mismatch via a change of measure. RWT reduces task mismatch to a fixed one-step correction and enables statistically sound reuse of source data. This alignment yields a two-stage RWT Q-learning framework that separates variance reduction from bias correction. Under RKHS function approximation, we establish regret bounds that scale with the complexity of the task shift rather than the target MDP. Empirical results in both tabular and neural network settings demonstrate consistent improvements over single-task learning and naïve pooling, highlighting Bellman alignment as a model-agnostic transfer principle for online RL.
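
Illustration of the re-weighted target (a minimal sketch, not taken from the paper; the symbols $P^{\mathrm{src}}_h$, $P^{\mathrm{tgt}}_h$, and $\rho_h$ are our assumptions and may not match the paper's notation). On the target task, the one-step Bellman regression target at step $h$ is

$$y_h = r^{\mathrm{tgt}}_h(s,a) + \max_{a'} Q_{h+1}(s',a'), \qquad s' \sim P^{\mathrm{tgt}}_h(\cdot \mid s,a).$$

Plugging in a source transition $s' \sim P^{\mathrm{src}}_h(\cdot \mid s,a)$ unchanged would bias this target; re-weighting the continuation value by a change of measure,

$$y^{\mathrm{RWT}}_h = r^{\mathrm{tgt}}_h(s,a) + \rho_h(s,a,s')\,\max_{a'} Q_{h+1}(s',a'), \qquad \rho_h(s,a,s') = \frac{dP^{\mathrm{tgt}}_h(\cdot \mid s,a)}{dP^{\mathrm{src}}_h(\cdot \mid s,a)}(s'),$$

restores unbiasedness, since $\mathbb{E}_{s' \sim P^{\mathrm{src}}_h}\bigl[\rho_h \max_{a'} Q_{h+1}(s',a')\bigr] = \mathbb{E}_{s' \sim P^{\mathrm{tgt}}_h}\bigl[\max_{a'} Q_{h+1}(s',a')\bigr]$. In this sketch, transfer reduces to exactly the fixed one-step correction the abstract describes.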


Source: arXiv:2601.21924v1 (http://arxiv.org/abs/2601.21924v1)
PDF: https://arxiv.org/pdf/2601.21924v1

Submission: 1/29/2026
Subjects: Statistics; Statistics & ML

