

Poster

Versatile Transition Generation with Image-to-Video Diffusion

Zuhao Yang · Jiahui Zhang · Yingchen Yu · Shijian Lu · Song Bai


Abstract:

Leveraging text, images, structure maps, or motion trajectories as conditional guidance, diffusion models have achieved great success in automated, high-quality video generation. However, generating smooth and plausible transition videos from the first and last video frames together with descriptive text prompts remains largely underexplored. We present VTG, a Versatile Transition video Generation framework that produces smooth, high-fidelity, and semantically coherent video transitions. VTG introduces interpolation-based initialization, which helps preserve object identity and handle abrupt content changes effectively. In addition, it incorporates dual-directional motion fine-tuning and representation alignment regularization, which mitigate the limitations of pre-trained image-to-video diffusion models in motion smoothness and generation fidelity, respectively. To evaluate VTG and facilitate future studies on unified transition generation, we collected TransitBench, a comprehensive benchmark for transition generation that covers two representative transition tasks: concept blending and scene transition. Extensive experiments show that VTG achieves superior transition performance consistently across the four tasks. Our code and data will be released.
