Poster

When and Where do Data Poisons Attack Textual Inversion?

Jeremy Styborski · Mingzhi Lyu · Jiayou Lu · Nupur Kapur · Adams Kong


Abstract:

Poisoning attacks pose significant challenges to the robustness of diffusion models (DMs). In this paper, we systematically analyze when and where poisoning affects textual inversion, a widely used personalization technique for DMs. We first introduce Semantic Sensitivity Maps (SSM), a novel method for visualizing the influence of poisoning on text embeddings. Second, we identify and experimentally verify that DMs exhibit non-uniform learning behavior across timesteps, concentrating learning on lower-noise samples. Poisoning attacks inherit this bias and inject adversarial signals predominantly at lower timesteps. Third, we find that adversarial signals distract DM learning away from relevant regions within training data, ultimately degrading textual inversion quality. Based on these insights, we propose Safe-Zone Training (SZT), a novel defense mechanism comprising three key components: (1) JPEG compression to weaken high-frequency poison signals, (2) restriction to higher timesteps during textual inversion training to avoid adversarial signals at lower timesteps, and (3) loss masking to constrain learning to relevant regions. Extensive experiments across multiple poisoning methods demonstrate that SZT significantly enhances the robustness of textual inversion against all poisoning attacks, improving average DINOv2 similarity across poisons to 0.43, compared to 0.26 for prior published defenses. We will publish code and datasets upon acceptance.
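The sketch below illustrates how the three SZT components described in the abstract could fit into a textual-inversion training step. It is a minimal PyTorch-style illustration, not the authors' implementation: the function names, the timestep cutoff, the JPEG quality setting, and the source of the relevance mask are all assumptions made for clarity.

```python
# Minimal sketch of the three Safe-Zone Training (SZT) components.
# Assumptions (not from the paper's code): function names, quality=75,
# the high-timestep range, and how the relevance mask is obtained.

import io
import torch
from PIL import Image
from torchvision.transforms.functional import to_pil_image, to_tensor


def jpeg_compress(img: torch.Tensor, quality: int = 75) -> torch.Tensor:
    """Component 1: JPEG compression to attenuate high-frequency poison signals.
    `img` is a (C, H, W) tensor in [0, 1]."""
    buf = io.BytesIO()
    to_pil_image(img.clamp(0, 1)).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return to_tensor(Image.open(buf))


def sample_high_timesteps(batch_size: int, t_min: int, t_max: int,
                          device: torch.device) -> torch.Tensor:
    """Component 2: restrict training to higher (noisier) timesteps,
    avoiding the low-timestep region where adversarial signals concentrate."""
    return torch.randint(t_min, t_max, (batch_size,), device=device)


def masked_diffusion_loss(noise_pred: torch.Tensor,
                          noise: torch.Tensor,
                          mask: torch.Tensor) -> torch.Tensor:
    """Component 3: loss masking, so only relevant regions drive the
    embedding update. `mask` is 1 on relevant pixels, 0 elsewhere,
    broadcastable to the prediction's shape."""
    per_pixel = (noise_pred - noise) ** 2
    return (per_pixel * mask).sum() / mask.sum().clamp(min=1.0)
```

In a textual-inversion loop under these assumptions, training images would first be JPEG-compressed, timesteps drawn only from the high range (e.g. t in [600, 1000) for a 1000-step scheduler), and the denoising loss weighted by the relevance mask before backpropagating into the learned token embedding.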