Research Paper · Researchia 202601.29035 · Machine Learning

EditYourself: Audio-Driven Generation and Manipulation of Talking Head Videos with Diffusion Transformers

John Flynn

Abstract

Current generative video models excel at producing novel content from text and image prompts, but leave a critical gap in editing existing pre-recorded videos, where minor alterations to the spoken script require preserving motion, temporal coherence, speaker identity, and accurate lip synchronization. We introduce EditYourself, a DiT-based framework for audio-driven video-to-video (V2V) editing that enables transcript-based modification of talking head videos, including the seamless addition, removal, and retiming of visually spoken content. Building on a general-purpose video diffusion model, EditYourself augments its V2V capabilities with audio conditioning and region-aware, edit-focused training extensions. This enables precise lip synchronization and temporally coherent restructuring of existing performances via spatiotemporal inpainting, including the synthesis of realistic human motion in newly added segments, while maintaining visual fidelity and identity consistency over long durations. This work represents a foundational step toward generative video models as practical tools for professional video post-production.
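The abstract does not detail how the audio conditioning or the "region-aware" editing is implemented. A common way to realize both in a DiT is cross-attention from video latent tokens to per-frame audio features, combined with a binary spatiotemporal inpainting mask that marks which tokens may be regenerated. The sketch below illustrates that pattern under those assumptions; `AudioConditionedDiTBlock`, `edit_mask`, and all shapes are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn


class AudioConditionedDiTBlock(nn.Module):
    """One DiT-style block: self-attention over flattened spatiotemporal
    video tokens, then cross-attention to audio features, then an MLP.
    A minimal sketch; the paper's actual conditioning mechanism is not
    described in the abstract."""

    def __init__(self, dim: int, audio_dim: int, n_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.self_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.audio_proj = nn.Linear(audio_dim, dim)
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        # x:     (B, N, dim)        flattened video latent tokens
        # audio: (B, T_a, audio_dim) per-frame speech features
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        a = self.audio_proj(audio)
        h = self.norm2(x)
        x = x + self.cross_attn(h, a, a, need_weights=False)[0]
        return x + self.mlp(self.norm3(x))


def edit_mask(n_frames: int, h: int, w: int,
              inserted: list[int],
              mouth_box: tuple[int, int, int, int]) -> torch.Tensor:
    """Spatiotemporal inpainting mask (1 = regenerate, 0 = keep).
    Assumption: newly inserted frames are regenerated entirely, while
    retained frames only free a mouth region for re-synchronization."""
    m = torch.zeros(n_frames, 1, h, w)
    y0, x0, y1, x1 = mouth_box
    m[:, :, y0:y1, x0:x1] = 1.0  # lip region editable in every frame
    m[inserted] = 1.0            # inserted segments fully editable
    return m


# Usage with arbitrary illustrative sizes.
block = AudioConditionedDiTBlock(dim=768, audio_dim=1024)
x = block(torch.randn(2, 16 * 32 * 32, 768), torch.randn(2, 16, 1024))
mask = edit_mask(16, 256, 256, inserted=[8, 9, 10],
                 mouth_box=(160, 96, 224, 160))
```

In a typical inpainting setup the mask and the masked source latents would additionally be concatenated to the model input along the channel dimension; the helper here only illustrates how region-aware editing can confine regeneration to inserted segments and lip regions while the rest of the performance is kept.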


Source: arXiv:2601.22127v1 (http://arxiv.org/abs/2601.22127v1)
PDF: https://arxiv.org/pdf/2601.22127v1

Submission: 1/29/2026
Subjects: Machine Learning; Machine Learning

