Research Paper — Researchia: 202604.11009 [Computer Vision]

When Numbers Speak: Aligning Textual Numerals and Visual Instances in Text-to-Video Diffusion Models

Zhengyang Sun

Abstract

Text-to-video diffusion models have enabled open-ended video synthesis, but they often struggle to generate the correct number of objects specified in a prompt. We introduce NUMINA, a training-free identify-then-guide framework for improved numerical alignment. NUMINA identifies prompt-layout inconsistencies by selecting discriminative self- and cross-attention heads to derive a countable latent layout. It then refines this layout conservatively and modulates cross-attention to guide regeneration. On the newly introduced CountBench, NUMINA improves counting accuracy by up to 7.4% on Wan2.1-1.3B, and by 4.9% and 5.5% on the 5B and 14B models, respectively. It also improves CLIP alignment while maintaining temporal consistency. These results demonstrate that structural guidance complements seed search and prompt enhancement, offering a practical path toward count-accurate text-to-video diffusion. The code is available at https://github.com/H-EmbodVis/NUMINA.
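The abstract's core mechanism, modulating cross-attention so that spatial positions inside a derived layout attend more strongly to the object token, can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the additive-bias form of the modulation, and the `strength` parameter are assumptions chosen for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def modulate_cross_attention(logits, layout_mask, token_idx, strength=2.0):
    """Hypothetical layout-guided modulation: before the softmax, boost the
    attention logit toward `token_idx` at spatial positions inside the layout
    mask, and suppress it outside, so generation concentrates the object
    token's influence in the intended regions.

    logits:      (num_spatial, num_tokens) pre-softmax cross-attention scores
    layout_mask: (num_spatial,) binary mask, 1 = inside the target layout
    """
    guided = logits.copy()
    # Map mask {0, 1} to {-strength, +strength} and add it as a logit bias.
    guided[:, token_idx] += strength * (2.0 * layout_mask - 1.0)
    return softmax(guided, axis=-1)

# Toy example: 4 spatial positions, 5 prompt tokens; guide token 2 into
# the first two positions.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 5))
mask = np.array([1.0, 1.0, 0.0, 0.0])
base = softmax(logits)
guided = modulate_cross_attention(logits, mask, token_idx=2)
```

Because the bias is added before the softmax, attention rows remain valid probability distributions; inside-mask positions give strictly more weight to the target token and outside-mask positions strictly less, which is the conservative kind of refinement the abstract describes.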


Source: arXiv:2604.08546v1 - http://arxiv.org/abs/2604.08546v1 PDF: https://arxiv.org/pdf/2604.08546v1 Original Link: http://arxiv.org/abs/2604.08546v1

Submission: 4/11/2026
Subjects: Computer Vision

