
Geometry-Guided Camera Motion Understanding in VideoLLMs

Haoan Feng

Abstract

Camera motion is a fundamental geometric signal that shapes visual perception and cinematic style, yet current video-capable vision-language models (VideoLLMs) rarely represent it explicitly and often fail on fine-grained motion primitives. We address this gap with a framework of benchmarking, diagnosis, and injection. We curate CameraMotionDataset, a large-scale synthetic dataset with explicit camera control, formulate camera motion as constraint-aware multi-label recognition, and construct a VQA benchmark, CameraMotionVQA. Across diverse off-the-shelf VideoLLMs, we observe substantial errors in recognizing camera motion primitives. Probing experiments on a Qwen2.5-VL vision encoder suggest that camera motion cues are weakly represented, especially in deeper ViT blocks, helping explain the observed failure modes. To bridge this gap without costly training or fine-tuning, we propose a lightweight, model-agnostic pipeline that extracts geometric camera cues from 3D foundation models (3DFMs), predicts constrained motion primitives with a temporal classifier, and injects them into downstream VideoLLM inference via structured prompting. Experiments demonstrate improved motion recognition and more camera-aware model responses, highlighting geometry-driven cue extraction and structured prompting as practical steps toward camera-aware VideoLLM and VLA systems. The dataset and benchmark are publicly available at https://hf.co/datasets/fengyee/camera-motion-dataset-and-benchmark.
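The "injection" step described in the abstract (feeding predicted camera-motion primitives to a VideoLLM via structured prompting) could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function name, primitive labels, and prompt template are all assumptions.

```python
# Hypothetical sketch of the injection step: camera-motion primitives
# predicted by a geometry-based temporal classifier are serialized into
# a structured text cue and prepended to the user's question before
# VideoLLM inference. Labels and template are illustrative only.

def build_camera_aware_prompt(primitives, question):
    """Wrap predicted motion primitives into a structured prompt prefix."""
    if primitives:
        cue = ("Camera motion cues (from a geometry-based predictor): "
               + ", ".join(primitives) + ".")
    else:
        cue = "Camera motion cues: none detected (static camera)."
    # The cue precedes the question so the VideoLLM can condition on it.
    return f"{cue}\nQuestion: {question}"

prompt = build_camera_aware_prompt(
    ["pan-left", "zoom-in"],
    "Describe how the camera moves in this clip.",
)
print(prompt)
```

Because the pipeline operates purely at the prompt level, it is model-agnostic: the same cue string can be injected into any off-the-shelf VideoLLM without retraining.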


Source: arXiv:2603.13119v1 (http://arxiv.org/abs/2603.13119v1)
PDF: https://arxiv.org/pdf/2603.13119v1

Submission: 3/16/2026
Subjects: Artificial Intelligence (AI)
