Poster

AIComposer: Any Style and Content Image Composition via Feature Integration

Haowen Li · Zhenfeng Fan · Zhang Wen · Zhengzhou Zhu · Yunjin Li


Abstract: Image composition has advanced significantly with large-scale pre-trained T2I diffusion models. Despite progress in same-domain composition, cross-domain composition remains under-explored. The main challenges are the stochastic nature of diffusion models and the style gap between input images, which lead to failures and artifacts. In addition, a heavy reliance on text prompts limits practical applications.

This paper presents the first cross-domain image composition method that does not require text prompts, enabling natural stylization and seamless composition. Our method is efficient and robust, and it preserves the diffusion prior: it requires only minor steps after an initial image blending, without further interference in the diffusion process. A multilayer perceptron integrates CLIP features from the foreground and background images, and the diffusion steps are manipulated with a cross-attention strategy. The method effectively preserves foreground content while enabling stable stylization without a pre-stylization network. We also create a benchmark dataset with diverse contents and styles for fair evaluation, addressing the lack of testing datasets for cross-domain image composition.

Our method outperforms state-of-the-art techniques in both qualitative and quantitative evaluations, reducing LPIPS scores by $30.5$\% and improving CSD metrics by $18.1$\%. We believe our method will advance future research and applications. The code and benchmark will be publicly available.
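The abstract does not specify the fusion architecture, so the following is only a minimal sketch of what an MLP that integrates foreground and background CLIP features into a cross-attention conditioning vector could look like. All names, dimensions (CLIP ViT-L/14 image embeddings of size 768), and layer choices are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FeatureFusionMLP(nn.Module):
    """Hypothetical sketch: fuse CLIP image embeddings of a foreground
    and a background image into one conditioning vector."""

    def __init__(self, clip_dim: int = 768, hidden_dim: int = 1024, out_dim: int = 768):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * clip_dim, hidden_dim),  # concatenated fg + bg embeddings
            nn.GELU(),
            nn.Linear(hidden_dim, out_dim),       # fused embedding for cross-attention
        )

    def forward(self, fg_embed: torch.Tensor, bg_embed: torch.Tensor) -> torch.Tensor:
        # fg_embed, bg_embed: (batch, clip_dim) CLIP image features
        fused = self.mlp(torch.cat([fg_embed, bg_embed], dim=-1))
        # (batch, out_dim); would serve as the context fed to the
        # diffusion model's cross-attention layers during denoising
        return fused

# Usage with random stand-ins for CLIP features:
fuser = FeatureFusionMLP()
fg = torch.randn(1, 768)   # e.g. CLIP embedding of the foreground object
bg = torch.randn(1, 768)   # e.g. CLIP embedding of the background / style image
context = fuser(fg, bg)    # conditioning vector guiding the diffusion steps
```

In this reading, the fused vector replaces the text-prompt embedding as the cross-attention context, which is consistent with the abstract's claim that no text prompts are required; how the paper actually injects it into the diffusion steps is not specified here.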
