Poster
Attention to Neural Plagiarism: Diffusion Models Can Plagiarize Your Copyrighted Images!
Zihang Zou · Boqing Gong · Liqiang Wang
In this paper, we highlight a critical threat posed by emerging neural models: data plagiarism. We demonstrate how modern neural models (e.g., diffusion models) can effortlessly replicate copyrighted images, even when those images are protected by advanced watermarking techniques. To expose this vulnerability in copyright protection and facilitate future research, we propose a general approach to neural plagiarism that can either forge replicas of copyrighted data or introduce copyright ambiguity. Our method, based on "anchors and shims", employs inverse latents as anchors and finds shim perturbations that gradually deviate the anchor latents, thereby evading watermark and copyright detection. By applying perturbations to the cross-attention mechanism at different timesteps, our approach induces varying degrees of semantic modification in the copyrighted images, enabling it to bypass protections ranging from visible trademarks and signatures to invisible watermarks. Notably, our method is a purely gradient-based search that requires no additional training or fine-tuning. Experiments on MS-COCO and real-world copyrighted images show that diffusion models can replicate copyrighted images, underscoring the urgent need for countermeasures against neural plagiarism.
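The abstract does not include code, but the "anchors and shims" idea can be illustrated with a minimal PyTorch sketch: fix an anchor latent obtained by inverting the copyrighted image, attach a small learnable "shim" perturbation, and run a pure gradient search that lowers a watermark detector's score while an image-space term keeps the replica visually close. All names here (`decode`, `watermark_detector`, the loss weights) are hypothetical placeholders, not the authors' implementation, and the timestep-dependent cross-attention perturbation described in the abstract is elided.

```python
# Hypothetical sketch of an "anchors and shims" search (not the authors' code).
# Assumes a differentiable decode() from latent to image and a differentiable
# watermark_detector() whose output is higher when the watermark is detected.
import torch
import torch.nn.functional as F

def shim_search(anchor_latent, decode, watermark_detector, original,
                steps=200, lr=1e-2, fidelity_weight=1.0):
    """Gradient-based shim search around a fixed anchor latent.

    anchor_latent: latent from inverting the copyrighted image (the anchor).
    decode: maps a latent to image space (e.g., diffusion sampling + VAE decode).
    original: the copyrighted image, used only to preserve visual fidelity.
    """
    # The shim starts at zero, so the search begins exactly at the anchor.
    shim = torch.zeros_like(anchor_latent, requires_grad=True)
    opt = torch.optim.Adam([shim], lr=lr)

    for _ in range(steps):
        image = decode(anchor_latent + shim)  # gradually deviated replica
        # Push the detector score down while staying close to the original.
        loss = watermark_detector(image) + fidelity_weight * F.mse_loss(image, original)
        opt.zero_grad()
        loss.backward()
        opt.step()

    return (anchor_latent + shim).detach()
```

Because only the shim is optimized, this matches the abstract's claim that no model training or fine-tuning is needed: the diffusion model and detector are frozen, and the search is purely gradient-based over the latent perturbation.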