Research Paper · Researchia: 202604.22058

LiveVLN: Breaking the Stop-and-Go Loop in Vision-Language Navigation

Xiangchen Wang


Submitted: April 22, 2026
Subjects: Robotics

Description / Details

Recent navigation systems achieve strong benchmark results, yet real-world deployment often remains visibly stop-and-go. This bottleneck arises because the sense-inference-execution loop is still blocking: after each new observation, the controller must wait for sensing, transmission, and inference before motion can continue. Reducing action-generation cost alone therefore does not remove redundant waiting. To address this issue, we present LiveVLN, a training-free framework for more continuous embodied navigation by augmenting pretrained VLM navigators with multi-step action continuation. Instead of pausing for each full sense-and-inference round, LiveVLN overlaps execution with the processing of newly arrived observations, allowing refreshed future actions to be handed off before the current executable prefix is exhausted. This design keeps actions continuously available during motion, reducing idle waiting and enabling smoother online execution. The framework operates at runtime and can be integrated with compatible pretrained VLM navigators. Across R2R and RxR, LiveVLN preserves benchmark performance while reducing waiting time and improving action availability. In real-world deployments, it cuts average episode waiting time by up to 77.7% and shortens wall-clock episode time by 12.6% on StreamVLN and 19.6% on NaVIDA, yielding more coherent execution during deployment. Code is available at https://github.com/NIneeeeeem/LiveVLN.
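The core idea in the abstract — keep executing a previously planned action prefix while inference on the newest observation runs in the background, and hand off the refreshed plan before the prefix runs out — can be sketched with ordinary threads. Everything below is an illustrative assumption (names, timings, and the `ActionBuffer` abstraction are not from the LiveVLN codebase); it only demonstrates the overlap pattern, with a sleep standing in for VLM inference.

```python
# Illustrative sketch of overlapping execution with inference.
# NOTE: fake_vlm_plan, ActionBuffer, and all timings are hypothetical
# stand-ins, not LiveVLN's actual API or method.
import threading
import time
from collections import deque


class ActionBuffer:
    """Thread-safe buffer holding the currently executable action prefix."""

    def __init__(self):
        self._lock = threading.Lock()
        self._actions = deque()

    def hand_off(self, actions):
        # The planner replaces the *future* plan atomically; actions already
        # popped by the executor stay executed.
        with self._lock:
            self._actions = deque(actions)

    def pop(self):
        with self._lock:
            return self._actions.popleft() if self._actions else None


def fake_vlm_plan(round_idx):
    """Stand-in for VLM inference: returns a short multi-step action chunk."""
    time.sleep(0.02)  # simulated inference latency
    return [f"move_{round_idx}_{i}" for i in range(4)]


def run_episode(num_rounds=3):
    buf = ActionBuffer()
    buf.hand_off(fake_vlm_plan(0))  # initial plan (blocking, once)
    executed, idle_waits = [], 0
    for r in range(1, num_rounds + 1):
        # Launch inference on the "new observation" in the background...
        planner = threading.Thread(target=lambda r=r: buf.hand_off(fake_vlm_plan(r)))
        planner.start()
        # ...while continuing to execute the current prefix.
        while planner.is_alive():
            action = buf.pop()
            if action is None:
                idle_waits += 1   # prefix exhausted before hand-off: idle wait
                time.sleep(0.005)
            else:
                executed.append(action)
                time.sleep(0.01)  # simulated per-action execution time
        planner.join()
    # Drain whatever remains of the final plan.
    while (action := buf.pop()) is not None:
        executed.append(action)
    return executed, idle_waits
```

Because each action chunk here takes longer to execute than inference takes to produce the next one, the buffer rarely empties and `idle_waits` stays near zero — the toy analogue of the reduced waiting time the paper reports.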


Source: arXiv:2604.19536v1 - http://arxiv.org/abs/2604.19536v1
PDF: https://arxiv.org/pdf/2604.19536v1

