
Context Unrolling in Omni Models

Ceyuan Yang

Abstract

We present Omni, a unified multimodal model natively trained on diverse modalities, including text, images, videos, 3D geometry, and hidden representations. We find that such training enables Context Unrolling, where the model explicitly reasons across multiple modal representations before producing predictions. This process enables the model to aggregate complementary information across heterogeneous modalities, facilitating a more faithful approximation of the shared multimodal knowledge manifold and improving downstream reasoning fidelity. As a result, Omni achieves strong performance on both multimodal generation and understanding benchmarks, while demonstrating advanced multimodal reasoning capabilities, including in-context generation of text, image, video, and 3D geometry.

Submitted: April 24, 2026
Subjects: Computer Vision
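
To make the abstract's description of Context Unrolling concrete, here is a minimal, illustrative sketch in Python: per-modality inputs are encoded into a shared representation space, and the model then "unrolls" for a few reasoning steps, appending each intermediate state back into the context before the final prediction. The names (`ModalInput`, `encode`, `reason_step`, `unroll_context`) and the mean-based aggregation are hypothetical stand-ins chosen for clarity, not the paper's actual architecture.

```python
# Hypothetical sketch of context unrolling across modalities.
# All function and class names are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class ModalInput:
    modality: str          # e.g. "text", "image", "video", "3d"
    features: list[float]  # placeholder for an encoded representation


def encode(inp: ModalInput) -> list[float]:
    # Stand-in for a per-modality encoder mapping raw input
    # into a shared representation space.
    return [x * 0.5 for x in inp.features]


def reason_step(context: list[list[float]]) -> list[float]:
    # Stand-in for one reasoning step: aggregate complementary
    # information across every representation currently in the
    # context (here, a simple element-wise mean).
    dim = len(context[0])
    return [sum(vec[i] for vec in context) / len(context) for i in range(dim)]


def unroll_context(inputs: list[ModalInput], n_steps: int = 3) -> list[float]:
    # Encode all modalities, then unroll: each intermediate
    # reasoning state is appended back into the context so later
    # steps can attend to it before a prediction is produced.
    context = [encode(x) for x in inputs]
    for _ in range(n_steps):
        context.append(reason_step(context))
    return context[-1]  # final state would feed the prediction head


if __name__ == "__main__":
    inputs = [
        ModalInput("text", [1.0, 0.0, 0.0]),
        ModalInput("image", [0.0, 1.0, 0.0]),
        ModalInput("3d", [0.0, 0.0, 1.0]),
    ]
    print(unroll_context(inputs))
```

The key design point the sketch tries to convey is that intermediate states live in the same context as the modal representations themselves, so each unrolling step can draw on both the raw modalities and earlier reasoning.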

Source: arXiv:2604.21921v1 (http://arxiv.org/abs/2604.21921v1)
PDF: https://arxiv.org/pdf/2604.21921v1

