
FlowVid: Taming Imperfect Optical Flows for Consistent Video-to-Video Synthesis: FlowVid


Too Long; Didn't Read

This paper proposes a consistent V2V synthesis framework by jointly leveraging spatial conditions and temporal optical flow clues within the source video.

(1) Feng Liang, The University of Texas at Austin; work partially done during an internship at Meta GenAI (Email: [email protected]);

(2) Bichen Wu, Meta GenAI, corresponding author;

(3) Jialiang Wang, Meta GenAI;

(4) Licheng Yu, Meta GenAI;

(5) Kunpeng Li, Meta GenAI;

(6) Yinan Zhao, Meta GenAI;

(7) Ishan Misra, Meta GenAI;

(8) Jia-Bin Huang, Meta GenAI;

(9) Peizhao Zhang, Meta GenAI (Email: [email protected]);

(10) Peter Vajda, Meta GenAI (Email: [email protected]);

(11) Diana Marculescu, The University of Texas at Austin (Email: [email protected]).

4. FlowVid

For video-to-video generation, given an input video with N frames I = {I_1, ..., I_N} and a text prompt τ, the goal is to transfer it to a new video I' = {I'_1, ..., I'_N} that adheres to the provided prompt τ', while keeping consistency across frames. We first discuss how we inflate an image-to-image diffusion model, such as ControlNet, to video with spatial-temporal attention [6, 25, 35, 46] (Section 4.1). Then, we introduce how to incorporate imperfect optical flow as a condition into our model (Section 4.2). Lastly, we introduce the edit-propagate design for generation (Section 4.3).
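
To make the attention inflation of Section 4.1 concrete, below is a minimal PyTorch sketch of spatial-temporal self-attention in the style of the cited works [25, 46]: each frame's queries attend to keys and values drawn from both the current frame and the first (anchor) frame. The module name `InflatedSelfAttention`, the `num_frames` argument, and the anchor-frame choice are illustrative assumptions, not the authors' actual implementation.

```python
# A minimal sketch of spatial-temporal attention inflation (assumed design,
# not FlowVid's exact code): each frame's self-attention keys/values are
# extended with tokens from an anchor frame to encourage temporal consistency.
import torch
import torch.nn.functional as F
from torch import nn


class InflatedSelfAttention(nn.Module):
    """Spatial self-attention inflated across frames: queries come from the
    current frame; keys/values are concatenated from the current frame and
    the first (anchor) frame of the video."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.heads = heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, num_frames: int) -> torch.Tensor:
        # x: (batch * num_frames, tokens, dim) -- per-frame spatial tokens.
        bf, t, d = x.shape
        b = bf // num_frames
        q = self.to_q(x)

        # Take the anchor (first) frame of each video and tile it so every
        # frame can attend to it alongside its own tokens.
        x_video = x.view(b, num_frames, t, d)
        anchor = x_video[:, :1].expand(-1, num_frames, -1, -1)
        kv = torch.cat([x_video, anchor], dim=2).reshape(bf, 2 * t, d)
        k, v = self.to_k(kv), self.to_v(kv)

        # Standard multi-head attention over the widened key/value set.
        def split(h: torch.Tensor) -> torch.Tensor:
            # (bf, n, dim) -> (bf, heads, n, dim // heads)
            return h.view(bf, -1, self.heads, d // self.heads).transpose(1, 2)

        out = F.scaled_dot_product_attention(split(q), split(k), split(v))
        out = out.transpose(1, 2).reshape(bf, t, d)
        return self.to_out(out)
```

Under this reading, inflation only widens where existing spatial attention looks (adding anchor-frame tokens) rather than adding new temporal layers, which is why a pretrained image-to-image model like ControlNet can be reused with minimal changes; the flow conditioning (Section 4.2) and edit-propagate generation (Section 4.3) then build on top of this backbone.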


This paper is available on arXiv under a CC 4.0 license.