Context Unrolling in Omni Models
Abstract
We present Omni, a unified multimodal model natively trained on diverse modalities, including text, images, videos, 3D geometry, and hidden representations. We find that such training enables Context Unrolling, where the model explicitly reasons across multiple modal representations before producing predictions. This process enables the model to aggregate complementary information across heterogeneous modalities, facilitating a more faithful approximation of the shared multimodal knowledge manifold and improving downstream reasoning fidelity. As a result, Omni achieves strong performance on both multimodal generation and understanding benchmarks, while demonstrating advanced multimodal reasoning capabilities, including in-context generation of text, image, video, and 3D geometry.
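To make the Context Unrolling idea concrete, here is a minimal, runnable toy sketch of the control flow the abstract describes: the model reasons over each modality's representation in turn, accumulating an intermediate trace, and only then produces a prediction from the aggregated trace. All names here (OmniSketch, encode, unroll_step, predict), the toy hash "encoder", and the string-based reasoning trace are illustrative assumptions, not the paper's actual architecture or API.

```python
from dataclasses import dataclass, field

@dataclass
class ModalInput:
    modality: str  # e.g. "text", "image", "video", "3d"
    payload: str   # stand-in for raw modality data

@dataclass
class OmniSketch:
    trace: list = field(default_factory=list)

    def encode(self, inp: ModalInput) -> int:
        # Toy stand-in for a modality encoder projecting each input
        # into a shared representation space.
        return hash((inp.modality, inp.payload)) % 1000

    def unroll_step(self, inp: ModalInput, code: int) -> str:
        # One "unrolling" step: intermediate reasoning conditioned on a
        # single modal representation plus the trace accumulated so far.
        return f"[{inp.modality}] repr={code}, after {len(self.trace)} prior steps"

    def predict(self) -> str:
        # Final prediction aggregates the full multimodal trace.
        return " -> ".join(self.trace) + " => answer"

    def answer(self, inputs: list[ModalInput]) -> str:
        # Context Unrolling: reason across all modal representations
        # before committing to a prediction.
        for inp in inputs:
            self.trace.append(self.unroll_step(inp, self.encode(inp)))
        return self.predict()

if __name__ == "__main__":
    model = OmniSketch()
    print(model.answer([
        ModalInput("text", "a red cube on a table"),
        ModalInput("image", "frame_0001.png"),
        ModalInput("3d", "cube.obj"),
    ]))
```

The design point this sketch captures is only the ordering constraint: intermediate per-modality reasoning is produced and shared before any final output, rather than each modality being encoded and fused in a single forward pass.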
Source: arXiv:2604.21921v1 (http://arxiv.org/abs/2604.21921v1) · PDF: https://arxiv.org/pdf/2604.21921v1
Apr 24, 2026
Computer Vision