Research Paper · Researchia:202603.24061

Seeing is Improving: Visual Feedback for Iterative Text Layout Refinement

Junrong Guo

Abstract

Recent advances in Multimodal Large Language Models (MLLMs) have enabled automated generation of structured layouts from natural language descriptions. Existing methods typically follow a code-only paradigm: they generate code to represent layouts, which graphic engines then render into final images. However, these methods are blind to the rendered visual outcome, making it difficult to guarantee readability and aesthetics. In this paper, we identify visual feedback as a critical factor in layout generation and propose the Visual Feedback Layout Model (VFLM), a self-improving framework that leverages visual feedback for iterative refinement. VFLM performs adaptive reflective generation: it uses visual information to reflect on issues in previous outputs and iteratively regenerates until satisfactory quality is achieved. This capability is trained through reinforcement learning with a visually grounded reward model that incorporates OCR accuracy. By rewarding only the final generated outcome, we effectively stimulate the model's iterative and reflective generative capabilities. Experiments across multiple benchmarks show that VFLM consistently outperforms advanced MLLMs, existing layout models, and code-only baselines, establishing visual feedback as critical for design-oriented MLLMs. Our code and data are available at https://github.com/FolSpark/VFLM.

Submitted: March 24, 2026. Subjects: AI; Artificial Intelligence
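The render-inspect-refine loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the generator, renderer, and critic passed in are toy stand-ins for what would really be an MLLM call, a graphics engine, and the OCR-grounded visual reward model, and all function and parameter names here are hypothetical.

```python
def refine_layout(description, generate, render, critique,
                  max_rounds=3, threshold=0.9):
    """Iteratively generate layout code, render it, and reflect on the
    rendered result until the visual quality score is satisfactory."""
    feedback = None
    best_code, best_score = None, -1.0
    for _ in range(max_rounds):
        code = generate(description, feedback)          # generation step (MLLM in the paper)
        image = render(code)                            # graphic-engine rendering
        score, feedback = critique(image, description)  # visual score + textual critique
        if score > best_score:
            best_code, best_score = code, score
        if score >= threshold:                          # satisfactory quality reached
            break
    return best_code, best_score

# Toy usage: the critic's score improves each round, mimicking refinement.
scores = iter([0.4, 0.7, 0.95])
code, score = refine_layout(
    "conference poster",
    generate=lambda d, fb: f"layout({d!r}, feedback={fb!r})",
    render=lambda c: c.upper(),
    critique=lambda img, d: (next(scores), "increase font size"),
)
print(score)  # 0.95 after three rounds
```

Note the loop feeds the critique of each rendered image back into the next generation call; the abstract's point is that a code-only pipeline has no such channel, since it never sees the rendered output.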


Source: arXiv:2603.22187v1 (http://arxiv.org/abs/2603.22187v1)
PDF: https://arxiv.org/pdf/2603.22187v1

