

Poster

InsViE-1M: Effective Instruction-based Video Editing with Elaborate Dataset Construction

Yuhui WU · Liyi Chen · Ruibin Li · Shihao Wang · Chenxi Xie · Lei Zhang


Abstract:

Instruction-based video editing enables effective and interactive editing of videos using instructions alone, without extra inputs such as masks or attributes. However, collecting high-quality training triplets (source video, edited video, instruction) is challenging. Existing datasets mostly consist of low-resolution, short-duration source videos in limited quantity, with unsatisfactory editing quality, which limits the performance of trained editing models. In this work, we present a high-quality Instruction-based Video Editing dataset with 1M triplets, namely InsViE-1M. We first curate high-resolution, high-quality source videos and images, then design an effective editing-filtering pipeline to construct high-quality editing triplets for model training. For each source video, we generate multiple edited samples of its first frame with different intensities of classifier-free guidance, which are automatically filtered by GPT-4o with carefully crafted guidelines. The edited first frame is then propagated to subsequent frames to produce the edited video, followed by another round of filtering for frame quality and motion evaluation. We also generate and filter a variety of video editing triplets from high-quality images. With the InsViE-1M dataset, we propose a multi-stage learning strategy to train our InsViE model, progressively enhancing its instruction-following and editing ability. Extensive experiments demonstrate the advantages of our InsViE-1M dataset and the trained model over state-of-the-art works. Data and code will be released.
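To make the described two-stage generate-then-filter pipeline concrete, here is a minimal Python sketch of its control flow. This is not the authors' released code: the callables edit_frame (first-frame editor), score_frame (GPT-4o-style judge), propagate (frame-propagation model), and score_video (frame-quality/motion evaluator), as well as the CFG scales and acceptance thresholds, are all hypothetical placeholders chosen for illustration.

from typing import Any, Callable, List, Optional, Tuple

Frame = Any  # placeholder for an image array or tensor

def build_triplet(
    source_video: List[Frame],
    instruction: str,
    edit_frame: Callable[[Frame, str, float], Frame],          # hypothetical first-frame editor
    score_frame: Callable[[Frame, Frame, str], float],         # hypothetical GPT-4o-style judge
    propagate: Callable[[List[Frame], Frame], List[Frame]],    # hypothetical propagation model
    score_video: Callable[[List[Frame], List[Frame]], float],  # hypothetical quality/motion check
    cfg_scales: Tuple[float, ...] = (3.0, 5.0, 7.5),           # assumed CFG intensities
    frame_thresh: float = 0.7,                                 # assumed acceptance threshold
    video_thresh: float = 0.7,                                 # assumed acceptance threshold
) -> Optional[Tuple[List[Frame], List[Frame], str]]:
    """Return a (source, edited, instruction) triplet, or None if filtered out."""
    first = source_video[0]
    # Stage 1: edit the first frame at several classifier-free-guidance
    # intensities and keep the candidate the judge scores highest.
    candidates = [edit_frame(first, instruction, s) for s in cfg_scales]
    scored = [(score_frame(first, c, instruction), c) for c in candidates]
    best_score, best_frame = max(scored, key=lambda x: x[0])
    if best_score < frame_thresh:
        return None  # no acceptable first-frame edit; discard this sample
    # Stage 2: propagate the edited first frame through the video, then run a
    # second round of filtering on per-frame quality and motion consistency.
    edited_video = propagate(source_video, best_frame)
    if score_video(source_video, edited_video) < video_thresh:
        return None
    return source_video, edited_video, instruction

The point of sampling several CFG intensities before filtering is to trade extra generation compute for a better chance that at least one candidate edit both follows the instruction and preserves the source content; the judge then selects among them rather than accepting a single fixed-scale output.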
