Learning Visuomotor Policy for Multi-Robot Laser Tag Game
Abstract
In this paper, we study multi-robot laser tag, a simplified yet practical shooting-game-style task. Classic modular approaches to this task face challenges such as limited observability and reliance on depth mapping and inter-robot communication. To overcome these issues, we present an end-to-end visuomotor policy that maps images directly to robot actions. We train a high-performing teacher policy with multi-agent reinforcement learning and distill its knowledge into a vision-based student policy. Technical designs, including a permutation-invariant feature extractor and a depth-heatmap input, improve performance over standard architectures. Our policy outperforms classic methods by 16.7% in hitting accuracy and 6% in collision avoidance, and is successfully deployed on real robots. Code will be released publicly.
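The permutation-invariant feature extractor mentioned above can be illustrated with a minimal Deep Sets-style sketch: each observed robot's features are embedded independently and then mean-pooled, so the output does not depend on the order in which teammates or opponents are listed. The weight matrices and dimensions here are hypothetical placeholders, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W_embed = rng.standard_normal((4, 8))  # hypothetical per-robot embedding weights
W_head = rng.standard_normal((8, 3))   # hypothetical output head

def extract(per_robot_features):
    """Map a set of per-robot feature vectors (N, 4) to a single (3,) vector.

    Mean-pooling over the robot axis makes the result invariant to the
    ordering of the observed robots.
    """
    h = np.tanh(per_robot_features @ W_embed)  # embed each robot independently
    pooled = h.mean(axis=0)                    # permutation-invariant pooling
    return pooled @ W_head

obs = rng.standard_normal((5, 4))       # five observed robots
shuffled = obs[rng.permutation(5)]      # same robots, different order
assert np.allclose(extract(obs), extract(shuffled))
```

Shuffling the input rows leaves the extracted feature unchanged, which is the property that lets the policy handle a variable, unordered set of other robots.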
Source: arXiv:2603.11980v1 (http://arxiv.org/abs/2603.11980v1)