Research Paper | Researchia: 202603.17084 [Robotics > Robotics]

Zero-Shot Generalization from Motion Demonstrations to New Tasks

Kilian Freitag

Abstract

Learning motion policies from expert demonstrations is an essential paradigm in modern robotics. While end-to-end models aim for broad generalization, they require large datasets and computationally heavy inference. Conversely, learning dynamical systems (DS) provides fast, reactive, and provably stable control from very few demonstrations. However, existing DS learning methods typically model isolated tasks and struggle to reuse demonstrations for novel behaviors. In this work, we formalize the problem of combining isolated demonstrations within a shared workspace to enable generalization to unseen tasks. We introduce the Gaussian Graph, which reinterprets the spatial components of learned motion primitives as discrete vertices with connections to one another. This formulation allows us to bridge continuous control with discrete graph search. We propose two frameworks leveraging this graph: Stitching, which constructs time-invariant DSs, and Chaining, which yields a sequence-based DS for complex motions while retaining convergence guarantees. Simulations and real-robot experiments show that these methods successfully generalize to new tasks where baseline methods fail.
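The abstract describes bridging continuous control with discrete graph search by treating the spatial (Gaussian) components of learned motion primitives as graph vertices. The sketch below illustrates one plausible reading of that idea, not the paper's actual construction: component means from two demonstrations are connected when they lie close in the shared workspace, and Dijkstra's algorithm recovers a component sequence linking a start region to a goal region. The distance-threshold edge criterion and all names are illustrative assumptions.

```python
import math
import heapq

def build_gaussian_graph(means, radius):
    """Connect Gaussian components whose means lie within `radius`.

    `means` is a list of workspace positions (one per component);
    edges are weighted by Euclidean distance. This overlap test is a
    stand-in assumption for whatever criterion the paper actually uses.
    """
    n = len(means)
    graph = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(means[i], means[j])
            if d <= radius:
                graph[i].append((j, d))
                graph[j].append((i, d))
    return graph

def shortest_component_sequence(graph, start, goal):
    """Dijkstra over the Gaussian graph; returns the vertex sequence
    from `start` to `goal`, or None if the components are disconnected."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if goal not in dist:
        return None
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# Toy example: component means from two demonstrations sharing a workspace,
# one running along the x-axis and one running up the line x = 2.
means = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (2.0, 1.0), (2.0, 2.0)]
graph = build_gaussian_graph(means, radius=1.1)
print(shortest_component_sequence(graph, 0, 4))  # → [0, 1, 2, 3, 4]
```

A continuous controller could then track the selected component sequence, which is the intuition behind combining demonstrations for unseen tasks; the Stitching and Chaining frameworks in the paper presumably differ in how this discrete sequence is turned back into a (time-invariant vs. sequence-based) dynamical system.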


Source: arXiv:2603.15445v1 (http://arxiv.org/abs/2603.15445v1)
PDF: https://arxiv.org/pdf/2603.15445v1

Submission: 3/17/2026
Subjects: Robotics

