
Cross-Modal Reinforcement Learning for Navigation with Degraded Depth Measurements

Omkar Sawant

Abstract

This paper presents a cross-modal learning framework that exploits complementary information from depth and grayscale images for robust navigation. We introduce a Cross-Modal Wasserstein Autoencoder that learns shared latent representations by enforcing cross-modal consistency, enabling the system to infer depth-relevant features from grayscale observations when depth measurements are corrupted. The learned representations are integrated with a Reinforcement Learning-based policy for collision-free navigation in unstructured environments when depth sensors experience degradation due to adverse conditions such as poor lighting or reflective surfaces. Simulation and real-world experiments demonstrate that our approach maintains robust performance under significant depth degradation and successfully transfers to real environments.

Submitted: March 24, 2026. Subjects: Robotics
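The core idea in the abstract, two modality-specific encoders mapped into one shared latent space, trained with a cross-modal consistency term plus a Wasserstein-autoencoder-style latent prior penalty, can be illustrated with a toy sketch. This is not the paper's implementation: the linear encoders, dimensions, and loss weights below are hypothetical stand-ins for the actual (presumably convolutional) networks, and the MMD term is one common choice of WAE latent penalty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper's encoders operate on depth/grayscale images.
D_IN, G_IN, LATENT = 32, 32, 8
W_depth = rng.normal(scale=0.1, size=(LATENT, D_IN))  # depth encoder
W_gray = rng.normal(scale=0.1, size=(LATENT, G_IN))   # grayscale encoder
W_dec = rng.normal(scale=0.1, size=(D_IN, LATENT))    # depth decoder

def mmd_to_prior(z, rng, n=256):
    """Rough RBF-kernel MMD between latent samples and a standard normal
    prior - a WAE-style penalty keeping the latent close to the prior."""
    p = rng.normal(size=(n, z.shape[1]))
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * z.shape[1]))
    return k(z, z).mean() + k(p, p).mean() - 2.0 * k(z, p).mean()

# Paired observations: here grayscale is just noisy depth, standing in
# for paired depth/grayscale training data.
depth = rng.normal(size=(64, D_IN))
gray = depth + 0.05 * rng.normal(size=(64, G_IN))

z_d = depth @ W_depth.T
z_g = gray @ W_gray.T

# Cross-modal consistency: pull both latents together so grayscale alone
# can stand in for corrupted depth at test time.
consistency = ((z_d - z_g) ** 2).mean()

# Cross-modal reconstruction: decode depth from the *grayscale* latent.
recon = z_g @ W_dec.T
recon_loss = ((recon - depth) ** 2).mean()

total = recon_loss + consistency + mmd_to_prior(z_d, rng)
print(float(total))
```

At convergence of such a training objective, a downstream RL policy can consume the shared latent regardless of which modality produced it, which is the mechanism the abstract describes for tolerating degraded depth.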


Source: arXiv:2603.22182v1 (http://arxiv.org/abs/2603.22182v1)
PDF: https://arxiv.org/pdf/2603.22182v1

