Poster
VideoAuteur: Towards Long Narrative Video Generation - A Case Study in How-to-Cook Videos
Junfei Xiao · Feng Cheng · Lu Qi · Liangke Gui · Yang Zhao · Shanchuan Lin · Jiepeng Cen · Zhibei Ma · Alan Yuille · Lu Jiang
Recent video generation models have shown promising results in producing high-quality video clips lasting several seconds. However, these models struggle to generate long sequences that convey clear and informative events, limiting their ability to support coherent narratives. In this paper, we present a large-scale cooking video dataset designed to advance long-form narrative generation in the cooking domain. We validate the quality of the proposed dataset in terms of visual fidelity and textual caption accuracy using state-of-the-art Vision-Language Models (VLMs) and video generation models, respectively. We further introduce a Long Narrative Video Director that enhances both visual and semantic coherence in generated videos, and we emphasize the role of aligning visual embeddings in improving overall video quality. Our method yields substantial improvements in generating visually detailed and semantically aligned keyframes, supported by fine-tuning techniques that integrate text and image embeddings within the video generation process. Code and data will be made publicly available.
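To make the conditioning idea concrete, here is a minimal, hypothetical sketch (not the authors' released code) of how text and image embeddings might be projected into a shared space and interleaved into a single conditioning sequence for a video generator; the class name, dimensions, and tensor shapes are illustrative assumptions.

```python
# Illustrative sketch only: fusing text and visual (keyframe) embeddings into
# one conditioning sequence, as a video model's cross-attention input might expect.
import torch
import torch.nn as nn

class InterleavedConditioner(nn.Module):
    """Hypothetical module: projects text and visual embeddings into a shared
    space and concatenates them along the sequence dimension."""
    def __init__(self, text_dim=768, vis_dim=1024, cond_dim=1024):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, cond_dim)
        self.vis_proj = nn.Linear(vis_dim, cond_dim)

    def forward(self, text_emb, vis_emb):
        # text_emb: (B, T_text, text_dim) caption token embeddings for the current step
        # vis_emb:  (B, T_vis, vis_dim) keyframe visual embeddings from a "director" model
        cond = torch.cat([self.text_proj(text_emb), self.vis_proj(vis_emb)], dim=1)
        return cond  # (B, T_text + T_vis, cond_dim)

# Toy usage with random tensors
cond = InterleavedConditioner()(torch.randn(2, 16, 768), torch.randn(2, 4, 1024))
print(cond.shape)  # torch.Size([2, 20, 1024])
```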