Research Paper | Researchia:202603.12081 [Robotics]

Lifelong Imitation Learning with Multimodal Latent Replay and Incremental Adjustment

Fanqi Yu

Abstract

We introduce a lifelong imitation learning framework that enables continual policy refinement across sequential tasks under realistic memory and data constraints. Our approach departs from conventional experience replay by operating entirely in a multimodal latent space, where compact representations of visual, linguistic, and robot-state information are stored and reused to support future learning. To further stabilize adaptation, we introduce an incremental feature adjustment mechanism that regularizes the evolution of task embeddings through an angular margin constraint, preserving inter-task distinctiveness. Our method establishes a new state of the art on the LIBERO benchmarks, achieving 10-17 point gains in AUC and up to 65% less forgetting compared with previous leading methods. Ablation studies confirm the effectiveness of each component, showing consistent gains over alternative strategies. The code is available at: https://github.com/yfqi/lifelong_mlr_ifa.
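The latent replay idea described above, storing compact multimodal codes rather than raw observations, can be sketched as a fixed-capacity buffer. This is a minimal illustrative sketch, not the paper's implementation: the class name, the concatenation of vision/language/state codes, and the uniform sampling scheme are all assumptions.

```python
import numpy as np
from collections import deque

class LatentReplayBuffer:
    """Fixed-capacity buffer holding compact latent codes (vision,
    language, robot state) plus the paired expert action.
    Illustrative sketch only; names and sampling are assumptions."""

    def __init__(self, capacity=1000, seed=0):
        self.buffer = deque(maxlen=capacity)  # oldest entries evicted first
        self.rng = np.random.default_rng(seed)

    def add(self, vision_z, lang_z, state_z, action):
        # Store one concatenated multimodal latent and its action label.
        latent = np.concatenate([vision_z, lang_z, state_z])
        self.buffer.append((latent, np.asarray(action)))

    def sample(self, batch_size):
        # Uniformly sample past latents to mix into the current task's batch.
        idx = self.rng.choice(len(self.buffer), size=batch_size, replace=True)
        latents, actions = zip(*(self.buffer[i] for i in idx))
        return np.stack(latents), np.stack(actions)
```

Under a memory budget, the `maxlen` eviction keeps storage bounded regardless of how many tasks are seen; a real system would likely use a smarter retention policy than first-in-first-out.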
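The angular margin constraint on task embeddings could take the form of a hinge penalty that fires whenever two embeddings fall within a minimum angular separation. The following is a hedged numpy sketch under that assumption; the function name, the hinge formulation, and the `margin_rad` parameter are illustrative choices, not details taken from the paper.

```python
import numpy as np

def angular_margin_penalty(task_embeddings, margin_rad=0.5):
    """Hinge penalty encouraging every pair of task embeddings to be
    separated by at least `margin_rad` radians (illustrative stand-in
    for an angular margin constraint)."""
    E = np.asarray(task_embeddings, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)  # unit-normalize rows
    cos_sim = E @ E.T                                 # pairwise cosines
    cos_margin = np.cos(margin_rad)
    # Penalize only distinct pairs whose cosine exceeds cos(margin),
    # i.e. pairs that are angularly closer than the margin allows.
    iu = np.triu_indices(len(E), k=1)
    return np.maximum(0.0, cos_sim[iu] - cos_margin).sum()
```

Orthogonal embeddings incur zero penalty, while near-collinear ones are pushed apart, which matches the stated goal of preserving inter-task distinctiveness as embeddings evolve.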


Source: arXiv:2603.10929v1 (http://arxiv.org/abs/2603.10929v1)
PDF: https://arxiv.org/pdf/2603.10929v1

Submission: 3/12/2026
Subjects: Robotics
