Poster
IntroStyle: Training-Free Introspective Style Attribution using Diffusion Features
Anand Kumar · Jiteng Mu · Nuno Vasconcelos
Text-to-image (T2I) models have gained widespread adoption among content creators and the general public. However, this has sparked significant concerns among artists regarding data privacy and copyright infringement. As a result, there is a growing demand for T2I models to incorporate mechanisms that prevent the generation of specific artistic styles, thereby safeguarding intellectual property rights. Existing methods for style extraction typically require collecting custom datasets and training specialized models. This is resource-intensive, time-consuming, and often impractical for real-time applications; it may also fail to keep pace with the dynamic nature of artistic styles and the rapidly evolving landscape of digital art. We present a novel, training-free framework for style attribution that uses only the features produced by a diffusion model, without any external modules or retraining. We denote this approach Introspective Style attribution (IntroStyle) and show that it outperforms state-of-the-art models for style retrieval. We also introduce a synthetic Artistic Style Split (ArtSplit) dataset to isolate artistic style and evaluate fine-grained style attribution performance.
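The abstract states only that style attribution is computed from a diffusion model's own features, without retraining; the details of IntroStyle are not given here. The sketch below is therefore an illustrative guess, assuming Stable Diffusion 1.5 loaded through the `diffusers` library, a hook on the UNet mid-block as the style feature, and cosine similarity for retrieval. The layer choice, noise timestep, and similarity measure are assumptions, not the paper's actual configuration.

```python
# Hedged sketch (not the authors' code): pool a diffusion UNet activation into a
# style descriptor and compare two images by cosine similarity. All design choices
# here (mid-block features, t=200, global average pooling) are illustrative.
import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float32
).to(device)
unet, vae, scheduler = pipe.unet, pipe.vae, pipe.scheduler

feats = {}

def _hook(_module, _inputs, output):
    # Global-average-pool the mid-block activation (B, C, H, W) into one vector per image.
    feats["style"] = output.mean(dim=(2, 3))

unet.mid_block.register_forward_hook(_hook)

# Unconditional (empty-prompt) text conditioning for the UNet forward pass.
tok = pipe.tokenizer(
    "", padding="max_length",
    max_length=pipe.tokenizer.model_max_length,
    return_tensors="pt",
).input_ids.to(device)
with torch.no_grad():
    empty_text_emb = pipe.text_encoder(tok)[0]

@torch.no_grad()
def style_descriptor(image: torch.Tensor, t: int = 200) -> torch.Tensor:
    """image: (1, 3, 512, 512) tensor scaled to [-1, 1]."""
    latents = vae.encode(image.to(device)).latent_dist.mean * vae.config.scaling_factor
    timestep = torch.tensor([t], device=device)
    noisy = scheduler.add_noise(latents, torch.randn_like(latents), timestep)
    unet(noisy, timestep, encoder_hidden_states=empty_text_emb)  # fills feats["style"]
    return torch.nn.functional.normalize(feats["style"], dim=-1)

def style_similarity(img_a: torch.Tensor, img_b: torch.Tensor) -> float:
    # Cosine similarity between the two pooled descriptors; higher = more similar style.
    return (style_descriptor(img_a) * style_descriptor(img_b)).sum(dim=-1).item()
```

In a retrieval setting, one would precompute descriptors for a gallery of reference artworks and rank them by this similarity against a query image; because everything runs through the frozen diffusion model, no additional training is involved.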