
DynamicVLA: A Vision-Language-Action Model for Dynamic Object Manipulation

Haozhe Xie

Abstract

Manipulating dynamic objects remains an open challenge for Vision-Language-Action (VLA) models, which, despite strong generalization in static manipulation, struggle in dynamic scenarios requiring rapid perception, temporal anticipation, and continuous control. We present DynamicVLA, a framework for dynamic object manipulation that integrates temporal reasoning and closed-loop adaptation through three key designs: 1) a compact 0.4B-parameter VLA that uses a convolutional vision encoder for spatially efficient, structurally faithful encoding, enabling fast multimodal inference; 2) Continuous Inference, which overlaps reasoning with execution for lower latency and timely adaptation to object motion; and 3) Latent-aware Action Streaming, which bridges the perception-execution gap by enforcing temporally aligned action execution. To address the lack of training data for dynamic manipulation, we introduce the Dynamic Object Manipulation (DOM) benchmark, built from scratch with an automated data-collection pipeline that efficiently gathers 200K synthetic episodes across 2.8K scenes and 206 objects, and enables fast collection of 2K real-world episodes without teleoperation. Extensive evaluations demonstrate substantial improvements in response speed, perception, and generalization, positioning DynamicVLA as a unified framework for general dynamic object manipulation across embodiments.
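
The abstract gives only the high-level idea behind Continuous Inference (overlapping reasoning with execution) and Latent-aware Action Streaming (temporally aligned action execution). As a rough illustration of that pattern, the Python sketch below runs policy inference for the next action chunk in a background thread while the previous chunk executes, and drops actions whose scheduled time has already passed during inference. All names, the 20 Hz control rate, the chunk length, and the timestamp-based alignment rule are illustrative assumptions, not the paper's actual implementation.

```python
import threading
import time
from collections import deque

CONTROL_DT = 0.05   # assumed 20 Hz control rate (not from the paper)
CHUNK_LEN = 8       # assumed action-chunk length (not from the paper)

def policy_inference(obs, t_obs):
    """Stand-in for the VLA forward pass: returns a chunk of actions,
    each stamped with the wall-clock time it is meant to execute."""
    time.sleep(0.12)  # simulated inference latency
    return [(t_obs + (i + 1) * CONTROL_DT, f"action@{i}") for i in range(CHUNK_LEN)]

class ContinuousController:
    """Overlap inference and execution: while the robot plays out the
    current action chunk, the next chunk is already being computed."""

    def __init__(self):
        self.queue = deque()
        self.lock = threading.Lock()
        self.worker = None

    def _infer(self, obs, t_obs):
        chunk = policy_inference(obs, t_obs)
        now = time.monotonic()
        with self.lock:
            self.queue.clear()
            # Temporal alignment: drop actions whose target time passed
            # while inference was running, so execution tracks the moving
            # scene instead of replaying stale commands.
            self.queue.extend(a for a in chunk if a[0] > now)

    def step(self, obs):
        # Launch the next inference as soon as the previous one finishes,
        # instead of blocking the control loop on it.
        if self.worker is None or not self.worker.is_alive():
            self.worker = threading.Thread(
                target=self._infer, args=(obs, time.monotonic()), daemon=True
            )
            self.worker.start()
        with self.lock:
            return self.queue.popleft() if self.queue else None

if __name__ == "__main__":
    ctrl = ContinuousController()
    for _ in range(40):
        act = ctrl.step(obs=None)  # obs would be the current camera frame
        if act is not None:
            print(f"executing {act[1]} scheduled for t={act[0]:.3f}")
        time.sleep(CONTROL_DT)
```

A real system would likely blend an incoming chunk with the one already executing rather than replacing it outright; the paper's specific hand-off mechanism is not described in this abstract.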


Source: arXiv:2601.22153v1 (http://arxiv.org/abs/2601.22153v1)
PDF: https://arxiv.org/pdf/2601.22153v1

Submitted: January 29, 2026
Subjects: Computer Vision
