Poster
ReferEverything: Towards Segmenting Everything We Can Speak of in Videos
Anurag Bagchi · Zhipeng Bao · Yu-Xiong Wang · Pavel Tokmakov · Martial Hebert
We present REM, a framework for segmenting a wide range of concepts in video that can be described through natural language. Our method unlocks the universal visual-language mapping learned by video diffusion models on Internet-scale data by fine-tuning them on small-scale referring object segmentation datasets. Our key insight is to preserve the generative model's architecture in its entirety while shifting its objective from predicting noise to predicting mask latents. The resulting model can accurately segment and track rare and unseen objects, despite being trained on object masks from only a limited set of categories. It also generalizes effortlessly to non-object dynamic concepts, such as smoke or raindrops, as demonstrated on our newly introduced benchmark for Referring Video Process Segmentation (Ref-VPS). Our experiments show that REM performs on par with state-of-the-art approaches on in-domain datasets such as Ref-DAVIS, while outperforming them out of domain by up to 11 points in region similarity, leveraging the power of Internet-scale pre-training.
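To make the stated objective shift concrete, below is a minimal, hypothetical sketch (not the authors' code) of how a video diffusion backbone could be fine-tuned to regress mask latents instead of noise; the module names (VAEEncoder, VideoDiffusionBackbone) and shapes are illustrative assumptions only.

    # Minimal sketch of the "predict mask latents instead of noise" idea.
    # All modules here are hypothetical stand-ins, not the actual REM components.
    import torch
    import torch.nn as nn

    class VAEEncoder(nn.Module):
        """Stand-in for a pretrained VAE mapping RGB or mask frames to latents."""
        def __init__(self, latent_dim=4):
            super().__init__()
            self.conv = nn.Conv3d(3, latent_dim, kernel_size=1)

        def forward(self, frames):               # frames: (B, 3, T, H, W)
            return self.conv(frames)             # latents: (B, C, T, H, W)

    class VideoDiffusionBackbone(nn.Module):
        """Stand-in for the denoising network, whose architecture is kept intact."""
        def __init__(self, latent_dim=4, text_dim=16):
            super().__init__()
            self.proj_text = nn.Linear(text_dim, latent_dim)
            self.net = nn.Conv3d(latent_dim, latent_dim, kernel_size=3, padding=1)

        def forward(self, video_latents, text_emb):
            cond = self.proj_text(text_emb)[:, :, None, None, None]
            return self.net(video_latents + cond)   # same shape as input latents

    def mask_latent_loss(backbone, vae, video, masks, text_emb):
        """Regress the latents of the referring masks (the objective shift
        described in the abstract) rather than the added noise."""
        video_latents = vae(video)                   # condition: encoded RGB frames
        mask_latents = vae(masks.expand_as(video))   # target: encoded mask frames
        pred = backbone(video_latents, text_emb)
        return torch.nn.functional.mse_loss(pred, mask_latents)

    # Toy usage with random tensors.
    vae, backbone = VAEEncoder(), VideoDiffusionBackbone()
    video = torch.randn(2, 3, 8, 32, 32)    # (batch, RGB, frames, H, W)
    masks = torch.rand(2, 1, 8, 32, 32)     # referring-mask frames
    text = torch.randn(2, 16)               # embedding of the referring expression
    loss = mask_latent_loss(backbone, vae, video, masks, text)
    loss.backward()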