Cross-Modal Reinforcement Learning for Navigation with Degraded Depth Measurements
Abstract
This paper presents a cross-modal learning framework that exploits complementary information from depth and grayscale images for robust navigation. We introduce a Cross-Modal Wasserstein Autoencoder that learns a shared latent representation by enforcing cross-modal consistency, enabling the system to infer depth-relevant features from grayscale observations when depth measurements are corrupted. The learned representations are integrated with a reinforcement learning policy to achieve collision-free navigation in unstructured environments, even when depth sensors degrade under adverse conditions such as poor lighting or reflective surfaces. Simulation and real-world experiments demonstrate that our approach maintains robust performance under significant depth degradation and successfully transfers to real environments.
Source: arXiv:2603.22182v1 - http://arxiv.org/abs/2603.22182v1 PDF: https://arxiv.org/pdf/2603.22182v1
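To make the abstract's architecture concrete, the following is a minimal PyTorch sketch of the cross-modal Wasserstein-autoencoder idea, written under my own assumptions rather than taken from the paper: two modality-specific encoders map depth and grayscale images into one shared latent space, a decoder reconstructs depth from either latent, an MMD penalty (the WAE-MMD variant) pushes the aggregate latent distribution toward a Gaussian prior, and an L2 term enforces cross-modal latent consistency so depth-relevant features can be inferred from grayscale alone. Layer sizes, loss weights, and the choice to reconstruct depth from both latents are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_encoder(in_ch: int, latent_dim: int) -> nn.Module:
    """Small CNN encoder mapping a 64x64 single-channel image to a latent vector."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),      # 32 -> 16
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),     # 16 -> 8
        nn.Flatten(),
        nn.Linear(128 * 8 * 8, latent_dim),
    )


class CrossModalWAE(nn.Module):
    """Hypothetical cross-modal WAE: shared latent space for depth and grayscale."""

    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.enc_depth = conv_encoder(1, latent_dim)   # depth-image encoder
        self.enc_gray = conv_encoder(1, latent_dim)    # grayscale-image encoder
        self.dec_depth = nn.Sequential(                # decode depth from any latent
            nn.Linear(latent_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, depth, gray):
        z_d = self.enc_depth(depth)
        z_g = self.enc_gray(gray)
        return z_d, z_g, self.dec_depth(z_d), self.dec_depth(z_g)


def mmd_penalty(z: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Gaussian-kernel MMD between encoded latents and samples from a N(0, I) prior."""
    prior = torch.randn_like(z)

    def kernel(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))

    n = z.size(0)
    k_zz = (kernel(z, z).sum() - n) / (n * (n - 1))          # drop the diagonal
    k_pp = (kernel(prior, prior).sum() - n) / (n * (n - 1))
    k_zp = kernel(z, prior).mean()
    return k_zz + k_pp - 2 * k_zp


def loss_fn(model, depth, gray, lam_mmd=1.0, lam_xmod=1.0):
    z_d, z_g, rec_d, rec_g = model(depth, gray)
    recon = F.mse_loss(rec_d, depth) + F.mse_loss(rec_g, depth)  # both latents must explain depth
    xmod = F.mse_loss(z_g, z_d.detach())                         # cross-modal latent consistency
    mmd = mmd_penalty(torch.cat([z_d, z_g], dim=0))              # match aggregate posterior to prior
    return recon + lam_xmod * xmod + lam_mmd * mmd
```

In a setup like this, the downstream navigation policy would consume the latent vector rather than raw images; when depth measurements are degraded, the grayscale encoder's latent `z_g` can stand in for `z_d`, which is the mechanism the abstract describes for maintaining performance under sensor degradation.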