

Poster

Less-to-More Generalization: Unlocking More Controllability by In-Context Generation

Shaojin Wu · Mengqi Huang · Wenxu Wu · Yufeng Cheng · Fei Ding · Qian He


Abstract: Although subject-driven generation has been extensively explored in image generation due to its wide applications, it still faces challenges in data scalability and subject expansibility. For the first challenge, moving from curating single-subject datasets to multi-subject ones and scaling them up is particularly difficult. For the second, most recent methods center on single-subject generation, making them hard to apply in multi-subject scenarios. In this study, we propose a highly consistent data synthesis pipeline to address these challenges. It leverages the intrinsic in-context generation capabilities of diffusion transformers. Additionally, we introduce $UNO$, which consists of progressive cross-modal alignment and universal rotary position embedding. Extensive experiments show that our method achieves high consistency while ensuring controllability in both single-subject and multi-subject driven generation.
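One way to read "universal rotary position embedding" is as a scheme that gives each reference image's latent tokens position indices disjoint from the noised target grid, so a multi-image conditioned transformer can attend to several subjects without positional collisions. The sketch below illustrates that idea only; it is a minimal assumption-laden example (a FLUX-style 2D RoPE over latent patch grids, with a diagonal offset scheme and a function name `position_ids_with_offsets` invented here), not the paper's implementation.

```python
import torch

def position_ids_with_offsets(target_hw, ref_hws):
    """Assign 2D (row, col) RoPE indices to the noised target latent grid and
    to each reference-image latent grid. Each successive grid is offset past
    the previous grid's maximum index on both axes, so reference tokens never
    share a position with target tokens or with each other."""
    ids = []
    row_off, col_off = 0, 0
    for h, w in [target_hw, *ref_hws]:
        rows = torch.arange(h).repeat_interleave(w) + row_off  # row index per token
        cols = torch.arange(w).repeat(h) + col_off             # col index per token
        ids.append(torch.stack([rows, cols], dim=-1))          # (h*w, 2)
        row_off += h
        col_off += w
    return ids

# One 32x32 target grid plus two 16x16 reference grids.
target, ref_a, ref_b = position_ids_with_offsets((32, 32), [(16, 16), (16, 16)])
print(target.max(dim=0).values)  # tensor([31, 31])
print(ref_a.min(dim=0).values)   # tensor([32, 32]): disjoint from the target
print(ref_b.min(dim=0).values)   # tensor([48, 48]): disjoint from both
```

Keeping the index ranges disjoint means adding a second or third reference image only extends the position grid rather than changing how existing tokens are encoded, which is consistent with the abstract's goal of scaling from single-subject to multi-subject generation.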
