
Data Analogies Enable Efficient Cross-Embodiment Transfer

Jonathan Yang

Abstract

Generalist robot policies are trained on demonstrations collected across a wide variety of robots, scenes, and viewpoints. Yet it remains unclear how best to organize and scale such heterogeneous data so that it genuinely improves performance in a given target setting. In this work, we ask: what form of demonstration data is most useful for enabling transfer across robot setups? We conduct controlled experiments that vary end-effector morphology, robot platform appearance, and camera perspective, and compare the effects of simply scaling the number of demonstrations against systematically broadening diversity in different ways. Our simulated experiments show that while perceptual shifts such as viewpoint benefit most from broad diversity, morphology shifts benefit far less from unstructured diversity and instead see the largest gains from data analogies, i.e., paired demonstrations that align scenes, tasks, and/or trajectories across different embodiments. Informed by the simulation results, we improve real-world cross-embodiment transfer success by an average of 22.5% over large-scale, unpaired datasets by changing only the composition of the data.
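The abstract's central construct, a "data analogy," is a pairing of demonstrations that share a scene, task, and/or trajectory but come from different embodiments. The paper's own pipeline is not shown on this page, but a minimal sketch of assembling such pairs might look like the following, matching demonstrations on shared scene and task identifiers; the `Demo` fields and the `build_analogy_pairs` helper are hypothetical names chosen for illustration, not taken from the paper.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Demo:
    embodiment: str   # e.g. "parallel_gripper" or "suction_cup" (illustrative)
    scene_id: str     # identifier for the scene layout
    task_id: str      # identifier for the task, e.g. "pick_mug"
    trajectory: list  # sequence of (observation, action) steps

def build_analogy_pairs(
    source: List[Demo], target: List[Demo]
) -> List[Tuple[Demo, Demo]]:
    """Pair demos that share a scene and task across two embodiments."""
    # Index target-embodiment demos by (scene, task) for constant-time lookup.
    index: Dict[Tuple[str, str], List[Demo]] = {}
    for d in target:
        index.setdefault((d.scene_id, d.task_id), []).append(d)

    # A pair counts as an analogy only if the embodiments actually differ.
    pairs: List[Tuple[Demo, Demo]] = []
    for d in source:
        for match in index.get((d.scene_id, d.task_id), []):
            if match.embodiment != d.embodiment:
                pairs.append((d, match))
    return pairs
```

Matching on scene and task identifiers is the coarsest level of alignment the abstract describes; a trajectory-level analogy would additionally align the motions themselves, which this sketch does not attempt.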


Source: arXiv:2603.06450v1 (http://arxiv.org/abs/2603.06450v1)
PDF: https://arxiv.org/pdf/2603.06450v1

Submission: 3/10/2026
Subjects: Robotics
