

Poster

From Image to Video: An Empirical Study of Diffusion Representations

Pedro Vélez · Luisa Polania Cabrera · Yi Yang · Chuhan Zhang · Rishabh Kabra · Anurag Arnab · Mehdi Sajjadi


Abstract:

Diffusion models have revolutionized generative modeling, enabling unprecedented realism in image and video synthesis. This success has sparked interest in leveraging their representations for visual understanding tasks. While recent works have explored this potential for image generation, the visual understanding capabilities of video diffusion models remain largely uncharted. To address this gap, we analyze the performance of latent image and video diffusion representations on various downstream tasks, including image classification, action recognition, depth estimation, and tracking. For the most informative comparison, we use the same model architecture, WALT, for both image and video generation objectives. Our results show that video generation pre-training consistently outperforms its image counterpart, though the margin of this advantage varies substantially across tasks. We further analyze features extracted from different layers and at varying noise levels, as well as the effect of model size and training budget on representation and generation quality. This work marks the first direct comparison of video and image diffusion objectives for visual understanding, offering insights into the role of temporal information in representation learning.
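To make the probing setup concrete, the sketch below illustrates the general recipe the abstract describes: noise the latents to a chosen timestep, run a denoiser, capture an intermediate layer's activations with a forward hook, and fit a linear probe on the pooled features. This is a minimal, hypothetical illustration only; WALT is not public, so `TinyDenoiser`, `extract_features`, the linear noise schedule, and all hyperparameters here are placeholders rather than the paper's actual model or protocol.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Stand-in for a latent video diffusion backbone (not WALT)."""
    def __init__(self, dim=64):
        super().__init__()
        self.inp = nn.Conv3d(4, dim, 3, padding=1)
        self.mid = nn.Conv3d(dim, dim, 3, padding=1)   # layer we probe
        self.out = nn.Conv3d(dim, 4, 3, padding=1)

    def forward(self, z_t, t):
        h = torch.relu(self.inp(z_t))
        h = torch.relu(self.mid(h))
        return self.out(h)

def extract_features(model, z, timestep, layer):
    """Noise latents to `timestep`, run the denoiser, and return pooled
    activations captured from `layer` via a forward hook."""
    feats = {}
    hook = layer.register_forward_hook(lambda m, i, o: feats.update(h=o))
    # Illustrative linear noise schedule (assumed, not the paper's).
    alpha = 1.0 - timestep / 1000.0
    z_t = alpha ** 0.5 * z + (1 - alpha) ** 0.5 * torch.randn_like(z)
    with torch.no_grad():
        model(z_t, timestep)
    hook.remove()
    return feats["h"].mean(dim=(2, 3, 4))   # global average pool -> (B, dim)

# Usage: probe features at one noise level for a toy classification task.
model = TinyDenoiser()
latents = torch.randn(8, 4, 5, 16, 16)      # (batch, channels, frames, H, W)
labels = torch.randint(0, 10, (8,))
x = extract_features(model, latents, timestep=100, layer=model.mid)
probe = nn.Linear(x.shape[1], 10)
loss = nn.functional.cross_entropy(probe(x), labels)
loss.backward()
```

Sweeping `timestep` and the hooked `layer` in this setup is how one would reproduce, in spirit, the layer- and noise-level analysis the abstract mentions.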
