

Poster

AID: Adapting Image2Video Diffusion Models for Instruction-guided Video Prediction

Zhen Xing · Qi Dai · Zejia Weng · Zuxuan Wu · Yu-Gang Jiang


Abstract:

Text-guided video prediction (TVP) involves predicting the motion of future frames from an initial frame according to an instruction, with wide applications in virtual reality, robotics, and content creation. Previous TVP methods have made significant breakthroughs by adapting Stable Diffusion for this task, but they struggle with frame consistency and temporal stability, primarily due to the limited scale of video datasets. We observe that pretrained Image2Video diffusion models possess good video dynamics priors but lack fine-grained textual control. Hence, transferring pretrained models to leverage their video dynamics priors while injecting fine-grained control for controllable video generation is both a meaningful and challenging task. To achieve this, we introduce a Multi-Modal Large Language Model (MLLM) to predict future video states based on initial frames and text instructions. More specifically, we design a dual query transformer (DQFormer) architecture, which integrates the instructions and frames into the conditional embeddings for future frame prediction. Additionally, we develop Temporal and Spatial Adapters that can quickly transfer general video diffusion models to specific scenarios with minimal training cost. Experimental results show that our method significantly outperforms state-of-the-art techniques on four datasets: Something-Something V2, Epic Kitchen-100, Bridge Data, and UCF-101. Notably, AID achieves 91.2% and 55.5% FVD improvements on Bridge and SSv2 respectively, demonstrating its effectiveness across domains.
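To make the dual-query conditioning idea concrete, below is a minimal PyTorch sketch of how a DQFormer-style module might fuse instruction tokens and initial-frame tokens into a single set of conditional embeddings. The paper's actual implementation is not given here; all class names, dimensions, and design choices (two learnable query sets, separate cross-attention per modality, linear fusion) are illustrative assumptions.

```python
# Hypothetical sketch of a dual-query conditioning module (not the authors' code).
import torch
import torch.nn as nn

class DualQueryBlock(nn.Module):
    """Two learnable query sets cross-attend to text and frame features,
    then are fused into one conditional embedding sequence."""
    def __init__(self, dim=768, num_queries=32, num_heads=8):
        super().__init__()
        # Assumed: separate learnable queries for each modality.
        self.text_queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.frame_queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.text_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.frame_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, text_feats, frame_feats):
        # text_feats:  (B, L_text, dim) instruction tokens (e.g., from an MLLM)
        # frame_feats: (B, L_img, dim)  initial-frame tokens (e.g., ViT patches)
        B = text_feats.size(0)
        tq = self.text_queries.unsqueeze(0).expand(B, -1, -1)
        fq = self.frame_queries.unsqueeze(0).expand(B, -1, -1)
        t, _ = self.text_attn(tq, text_feats, text_feats)     # instruction-aware queries
        f, _ = self.frame_attn(fq, frame_feats, frame_feats)  # appearance-aware queries
        cond = self.fuse(torch.cat([t, f], dim=-1))           # (B, num_queries, dim)
        # The fused embeddings would then condition the video diffusion model,
        # e.g., as keys/values in its cross-attention layers.
        return cond
```

In this sketch the frozen Image2Video backbone is untouched; only the query sets, attention layers, and fusion projection would be trained, which is consistent with the abstract's goal of adapting a pretrained model at minimal training cost.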
