Research Paper · Researchia 202603.11068 · Artificial Intelligence > AI

Towards Batch-to-Streaming Deep Reinforcement Learning for Continuous Control

Riccardo De Monte

Abstract

State-of-the-art deep reinforcement learning (RL) methods have achieved remarkable performance in continuous control tasks, yet their computational complexity is often incompatible with the constraints of resource-limited hardware, due to their reliance on replay buffers, batch updates, and target networks. The emerging paradigm of streaming deep RL addresses this limitation through purely online updates, achieving strong empirical performance on standard benchmarks. In this work, we propose two novel streaming deep RL algorithms, Streaming Soft Actor-Critic (S2AC) and Streaming Deterministic Actor-Critic (SDAC), explicitly designed to be compatible with state-of-the-art batch RL methods, making them particularly suitable for on-device finetuning applications such as Sim2Real transfer. Both algorithms achieve performance comparable to state-of-the-art streaming baselines on standard benchmarks without requiring tedious hyperparameter tuning. Finally, we further investigate the practical challenges of transitioning from batch to streaming learning during finetuning and propose concrete strategies to tackle them.
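To make the contrast in the abstract concrete, the sketch below illustrates the streaming paradigm in its simplest generic form: a purely online TD(0) value update with linear features, where each transition is consumed once and discarded, with no replay buffer, no batch updates, and no target network. This is an illustrative assumption-laden toy, not the paper's S2AC or SDAC algorithms; the function name, feature setup, and hyperparameters are hypothetical.

```python
import numpy as np

def streaming_td0_update(w, phi_s, phi_s_next, reward, done,
                         gamma=0.99, lr=0.01):
    """One purely online TD(0) step on value weights w (hypothetical helper).

    Streaming setting: the update uses only the current transition
    (s, r, s'), so nothing needs to be stored -- unlike batch RL,
    which samples minibatches from a replay buffer and bootstraps
    from a separate target network.
    """
    v_s = w @ phi_s
    v_next = 0.0 if done else w @ phi_s_next
    td_error = reward + gamma * v_next - v_s          # one-step TD error
    return w + lr * td_error * phi_s                  # single-sample update

# Toy usage: a short stream of random-feature transitions.
rng = np.random.default_rng(0)
w = np.zeros(4)
phi = rng.normal(size=4)
for t in range(100):
    phi_next = rng.normal(size=4)
    w = streaming_td0_update(w, phi, phi_next, reward=1.0, done=False)
    phi = phi_next  # the transition is discarded after the update
```

Because only the current transition is held in memory, the per-step cost and footprint are constant, which is the property that makes streaming updates attractive on resource-limited hardware.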


Source: arXiv:2603.08588v1 - http://arxiv.org/abs/2603.08588v1
PDF: https://arxiv.org/pdf/2603.08588v1

Submission: 3/11/2026
Subjects: Artificial Intelligence (AI)

