
RL-Augmented MPC for Non-Gaited Legged and Hybrid Locomotion

Andrea Patrizi

Abstract

We propose a contact-explicit hierarchical architecture coupling Reinforcement Learning (RL) and Model Predictive Control (MPC), where a high-level RL agent provides gait and navigation commands to a low-level locomotion MPC. This offloads the combinatorial burden of contact timing from the MPC by learning acyclic gaits through trial and error in simulation. We show that only a minimal set of rewards and limited tuning are required to obtain effective policies. We validate the architecture in simulation across robotic platforms spanning 50 kg to 120 kg and different MPC implementations, observing the emergence of acyclic gaits and timing adaptations in flat-terrain legged and hybrid locomotion, and further demonstrating extensibility to non-flat terrains. Across all platforms, we achieve zero-shot sim-to-sim transfer without domain randomization, and we further demonstrate zero-shot sim-to-real transfer without domain randomization on Centauro, our 120 kg wheeled-legged humanoid robot. We make our software framework and evaluation results publicly available at https://github.com/AndrePatri/AugMPC.
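To make the hierarchy concrete, the sketch below illustrates the control loop described in the abstract: a high-level RL agent emits gait (per-leg contact) and navigation commands, which a low-level MPC consumes with the contact sequence fixed, so the MPC never searches over contact timing. This is not the paper's implementation; all class names, the heuristic policy, and the toy proportional "MPC" are hypothetical stand-ins.

```python
# Illustrative sketch (not from the paper) of an RL-augmented MPC hierarchy:
# the RL policy decides *which* legs are in contact (offloading the
# combinatorial contact-timing problem), and the MPC tracks the commanded
# base velocity under those fixed contacts. All names and dynamics are toys.
from dataclasses import dataclass
from typing import List

@dataclass
class GaitCommand:
    contacts: List[int]   # per-leg contact flags over the next horizon
    target_vel: float     # high-level navigation command (base velocity, m/s)

class HighLevelRLPolicy:
    """Stand-in for a trained RL agent mapping state to gait/navigation commands."""
    def act(self, commanded_vel: float) -> GaitCommand:
        # Toy heuristic in place of a learned policy: swing more legs
        # as the commanded speed grows (acyclic: no fixed gait cycle).
        n_swing = min(2, int(abs(commanded_vel) // 0.5))
        contacts = [0] * n_swing + [1] * (4 - n_swing)
        return GaitCommand(contacts=contacts, target_vel=commanded_vel)

class LocomotionMPC:
    """Stand-in for the low-level MPC: with contacts fixed by the RL agent,
    the remaining problem is smooth tracking (here a proportional step
    replaces the actual QP solve)."""
    def __init__(self, gain: float = 0.5):
        self.gain = gain

    def step(self, vel: float, cmd: GaitCommand) -> float:
        return vel + self.gain * (cmd.target_vel - vel)

def run(policy: HighLevelRLPolicy, mpc: LocomotionMPC,
        target: float, steps: int) -> float:
    vel = 0.0
    for _ in range(steps):
        cmd = policy.act(target)   # high level: gait + navigation command
        vel = mpc.step(vel, cmd)   # low level: track under fixed contacts
    return vel

final_vel = run(HighLevelRLPolicy(), LocomotionMPC(), target=1.0, steps=20)
```

The separation of concerns is the point: the discrete decision (contact flags) is made once per step by the policy, so the MPC's optimization stays continuous and convex-friendly.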


Source: arXiv:2603.10878v1 (http://arxiv.org/abs/2603.10878v1)
PDF: https://arxiv.org/pdf/2603.10878v1

Submission: 3/12/2026
Subjects: Robotics

