Research Paper · Researchia: 202603.03050 · Artificial Intelligence > AI

A Mixed Diet Makes DINO An Omnivorous Vision Encoder

Rishabh Kabra

Abstract

Pre-trained vision encoders like DINOv2 have demonstrated exceptional performance on unimodal tasks. However, we observe that their feature representations are poorly aligned across different modalities. For instance, the feature embeddings of an RGB image and the corresponding depth map of the same scene exhibit a cosine similarity nearly identical to that of two random, unrelated images. To address this, we propose the Omnivorous Vision Encoder, a novel framework that learns a modality-agnostic feature space. We train the encoder with a dual objective: first, to maximize feature alignment between different modalities of the same scene; and second, to anchor the learned representations, via distillation, to the output of a fully frozen teacher such as DINOv2. The resulting student encoder becomes "omnivorous", producing a consistent, powerful embedding for a given scene regardless of the input modality (RGB, depth, segmentation, etc.). This approach enables robust cross-modal understanding while retaining the discriminative semantics of the original foundation model.
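The dual objective described above can be sketched as a single loss with two terms: a cross-modal alignment term that pulls embeddings of different modalities of the same scene together, and a distillation term that anchors the student's embedding to a frozen teacher's output. The function and parameter names below (`dual_objective_loss`, the weight `lam`) are illustrative assumptions, not the paper's actual implementation; a minimal sketch using NumPy:

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Row-wise cosine similarity between two batches of embeddings."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return np.sum(a * b, axis=-1)

def dual_objective_loss(z_rgb, z_depth, z_teacher, lam=1.0):
    """Hypothetical sketch of the paper's dual objective.

    z_rgb, z_depth : student embeddings of two modalities of the same scene
    z_teacher      : frozen teacher (e.g. DINOv2) embedding of the scene
    lam            : assumed weight balancing the two terms
    """
    # (1) alignment: maximize cross-modal cosine similarity
    align = 1.0 - cosine_sim(z_rgb, z_depth)
    # (2) distillation: keep the student close to the frozen teacher
    distill = 1.0 - cosine_sim(z_rgb, z_teacher)
    return float(np.mean(align + lam * distill))
```

When all three embeddings coincide, both terms vanish and the loss is zero; in practice the paper's actual loss formulation and weighting may differ from this sketch.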


Source: arXiv:2602.24181v1 - http://arxiv.org/abs/2602.24181v1
PDF: https://arxiv.org/pdf/2602.24181v1

Submission: 3/3/2026
Subjects: Artificial Intelligence (AI)

