Egocentric Tactile and Proximity Sensors as Observation Priors for Humanoid Collision Avoidance
Abstract
Collision-free motion is often aided by tactile and proximity sensors distributed over the robot's body, since, unlike external cameras, they are resistant to occlusion. However, how sensor properties such as sensing coverage, type, and range should be shaped to enable avoidant behavior remains unclear. In this work, we present a reinforcement learning framework for whole-body collision avoidance on a humanoid H1-2 robot and use it to characterize how sensor properties shape learned avoidance behavior. Using dodgeball as a benchmark task, we ablate the properties of sensors distributed across the robot's upper body and find that raw proximity measurements can substitute for explicit object localization provided the sensing range is sufficient, and that sparse non-directional proximity signals outperform dense directional alternatives in sample efficiency.
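The abstract contrasts two observation encodings for the policy: sparse non-directional proximity signals (one range reading per sensor patch) versus dense directional ones (range plus bearing per sensor). The paper does not release code here, so the following is a minimal sketch of what those encodings might look like; the sensor layout, `max_range`, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sparse_proximity_obs(sensor_positions, obstacle_points, max_range=1.0):
    """Sparse, non-directional reading: one scalar per sensor patch,
    the distance to the nearest obstacle point, clipped to max_range."""
    obs = np.full(len(sensor_positions), max_range, dtype=float)
    if len(obstacle_points) == 0:
        return obs
    for i, p in enumerate(sensor_positions):
        d = np.linalg.norm(obstacle_points - p, axis=1).min()
        obs[i] = min(d, max_range)
    return obs

def dense_directional_obs(sensor_positions, obstacle_points, max_range=1.0):
    """Dense, directional reading: per sensor, the clipped distance plus a
    unit vector toward the nearest obstacle (zeros if out of range)."""
    n = len(sensor_positions)
    obs = np.zeros((n, 4))
    obs[:, 0] = max_range  # default: nothing in range
    if len(obstacle_points) == 0:
        return obs.ravel()
    for i, p in enumerate(sensor_positions):
        dists = np.linalg.norm(obstacle_points - p, axis=1)
        j = int(dists.argmin())
        if dists[j] < max_range:
            obs[i, 0] = dists[j]
            obs[i, 1:] = (obstacle_points[j] - p) / max(dists[j], 1e-9)
    return obs.ravel()

# Example: two torso-mounted sensors, one incoming ball 0.5 m from sensor 0.
sensors = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
ball = np.array([[0.5, 0.0, 0.0]])
sparse = sparse_proximity_obs(sensors, ball)   # shape (2,)
dense = dense_directional_obs(sensors, ball)   # shape (8,)
```

The dimensionality gap is the point of the comparison: the sparse encoding contributes one value per sensor to the policy's observation, while the directional one contributes four, which is one plausible reason the abstract reports better sample efficiency for the sparse variant.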
Source: arXiv:2604.25554v1 (http://arxiv.org/abs/2604.25554v1, PDF: https://arxiv.org/pdf/2604.25554v1)
Apr 29, 2026
Robotics