
Poster

Meta-Unlearning on Diffusion Models: Preventing Relearning Unlearned Concepts

Hongcheng Gao · Tianyu Pang · Chao Du · Taihang Hu · Zhijie Deng · Min Lin


Abstract:

With the rapid progress of diffusion models (DMs), significant efforts are being made to unlearn harmful or copyrighted concepts from pretrained DMs to prevent potential model misuse. However, even when DMs are properly unlearned before release, malicious finetuning can compromise this process, causing DMs to relearn the unlearned concepts. This occurs partly because certain benign concepts (e.g., "skin") retained in DMs are related to the unlearned ones (e.g., "nudity"), facilitating their relearning via finetuning. To address this, we propose meta-unlearning on DMs. Intuitively, a meta-unlearned DM should behave like an unlearned DM when used as is; moreover, if the meta-unlearned DM undergoes malicious finetuning on unlearned concepts, the related benign concepts retained within it are triggered to self-destruct, hindering the relearning of the unlearned concepts. Our meta-unlearning framework is compatible with most existing unlearning methods, requiring only the addition of an easy-to-implement meta objective. We validate our approach through empirical experiments on meta-unlearning concepts from Stable Diffusion models (SD-v1-4 and SDXL), supported by extensive ablation studies.
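The abstract does not spell out the meta objective itself. As a rough illustration of what a bi-level objective of this kind could look like, here is a minimal PyTorch sketch: an inner step simulates malicious finetuning on the harmful concept, and an outer (meta) term pushes the related benign concept to degrade after that simulated step. All names here (`denoise_loss`, `unlearn_loss`, `harmful_batch`, `benign_batch`, `lr_inner`, `meta_weight`) are hypothetical assumptions for illustration, not the authors' released code.

```python
# Hypothetical sketch of a meta-unlearning objective (PyTorch >= 2.0).
# Not the paper's implementation; an illustration of the bi-level idea.
import torch
from torch.func import functional_call


def denoise_loss(model, params, batch):
    """Standard diffusion denoising MSE, evaluated with explicit `params`.

    `batch` is assumed to hold (noisy_latents, timesteps, cond, target_noise).
    """
    noisy, t, cond, target = batch
    pred = functional_call(model, params, (noisy, t, cond))
    return torch.nn.functional.mse_loss(pred, target)


def meta_unlearning_loss(model, unlearn_loss, harmful_batch, benign_batch,
                         lr_inner=1e-4, meta_weight=1.0):
    params = dict(model.named_parameters())

    # (1) Whatever unlearning objective is already in use (e.g., an
    #     ESD-style loss); treated as a black box here.
    loss_u = unlearn_loss(model, harmful_batch)

    # (2) Simulate one step of malicious finetuning that tries to relearn
    #     the harmful concept via the ordinary denoising loss. We keep the
    #     graph so the meta term can backprop through this inner update.
    loss_relearn = denoise_loss(model, params, harmful_batch)
    grads = torch.autograd.grad(loss_relearn, list(params.values()),
                                create_graph=True)
    adapted = {name: p - lr_inner * g
               for (name, p), g in zip(params.items(), grads)}

    # (3) Meta term: after the simulated finetune, the *related benign*
    #     concept (e.g., "skin" for "nudity") should self-destruct, i.e.
    #     its denoising loss should be high, so we maximize it. A practical
    #     version would likely bound or reweight this term.
    loss_meta = -denoise_loss(model, adapted, benign_batch)

    return loss_u + meta_weight * loss_meta
```

A training loop would simply call `meta_unlearning_loss(...).backward()` and step the optimizer; first-order variants could detach the inner gradients to avoid the cost of second-order differentiation.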
