Research Paper · Researchia:202603.24063 · Data Science > Machine Learning

The Dual Mechanisms of Spatial Reasoning in Vision-Language Models

Kelly Cui

Abstract

Many multimodal tasks, such as image captioning and visual question answering, require vision-language models (VLMs) to associate objects with their properties and spatial relations. Yet it remains unclear where and how such associations are computed within VLMs. In this work, we show that VLMs rely on two concurrent mechanisms to represent such associations. In the language model backbone, intermediate layers represent content-independent spatial relations on top of visual tokens corresponding to objects. However, this mechanism plays only a secondary role in shaping model predictions. Instead, the dominant source of spatial information originates in the vision encoder, whose representations encode the layout of objects and are directly exploited by the language model backbone. Notably, this spatial signal is distributed globally across visual tokens, extending beyond object regions into surrounding background areas. We show that enhancing these vision-derived spatial representations globally across all image tokens improves spatial reasoning performance on naturalistic images. Together, our results clarify how spatial association is computed within VLMs and highlight the central role of vision encoders in enabling spatial reasoning.
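The abstract does not spell out how the "enhancing" intervention is implemented, so the following is only a minimal sketch of the general idea, not the authors' method: amplify an assumed low-rank "spatial subspace" of the vision-encoder outputs uniformly across all visual tokens (object and background alike) before they enter the language-model backbone. The function name, the subspace basis, and all dimensions below are hypothetical placeholders.

```python
# Illustrative sketch only. The paper's actual procedure is not given in the
# abstract; `enhance_spatial`, the spatial subspace basis, and the toy sizes
# are all assumptions made for demonstration.
import torch

torch.manual_seed(0)

num_visual_tokens, d_model = 576, 1024          # e.g. a 24x24 patch grid (toy sizes)
visual_tokens = torch.randn(num_visual_tokens, d_model)  # stand-in vision-encoder outputs

# Suppose a probe has identified a low-rank subspace of the vision-encoder
# representation that carries object-layout information (hypothetical here:
# we just draw a random orthonormal basis of rank k).
k = 8
spatial_basis, _ = torch.linalg.qr(torch.randn(d_model, k))  # (d_model, k), orthonormal columns

def enhance_spatial(tokens: torch.Tensor, basis: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Scale up the component of EVERY token inside the spatial subspace.

    tokens: (n, d) visual-token representations
    basis:  (d, k) orthonormal basis of the assumed spatial subspace
    alpha:  extra gain on the spatial component (alpha=0 leaves tokens unchanged)
    """
    spatial_component = tokens @ basis @ basis.T   # project each token onto the subspace
    return tokens + alpha * spatial_component      # amplify that component globally

steered = enhance_spatial(visual_tokens, spatial_basis, alpha=0.5)
print(steered.shape)  # torch.Size([576, 1024]); fed to the LM backbone as usual
```

The key design point this sketch mirrors is the abstract's finding that the spatial signal is distributed globally: the gain is applied to all image tokens, including background regions, rather than only to tokens inside detected object bounding boxes.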


Source: arXiv:2603.22278v1 (http://arxiv.org/abs/2603.22278v1)
PDF: https://arxiv.org/pdf/2603.22278v1

Submission: 3/24/2026
Subjects: Machine Learning; Data Science

