

Poster

Hate in Plain Sight: On the Risks of Moderating AI-Generated Hateful Illusions

Yiting Qu · Ziqing Yang · Yihan Ma · Michael Backes · Savvas Zannettou · Yang Zhang


Abstract:

Recent advances in text-to-image diffusion models have enabled the creation of a new form of digital art: optical illusions---visual tricks that create different perceptions of reality. However, adversaries may misuse such techniques to generate hateful illusions, which embed specific hate messages into harmless scenes and disseminate them across web communities. In this work, we take the first step toward investigating the risks of scalable hateful illusion generation and the potential for bypassing current content moderation models. Specifically, we generate 1,860 optical illusions using Stable Diffusion and ControlNet, conditioned on 62 hate messages. Of these, 1,571 are hateful illusions that successfully embed hate messages, either overtly or subtly, forming the Hateful Illusion dataset. Using this dataset, we evaluate the performance of six moderation classifiers and nine vision language models (VLMs) in identifying hateful illusions. Experimental results reveal significant vulnerabilities in existing moderation models: the detection accuracy falls below 0.245 for moderation classifiers and below 0.102 for VLMs. We further identify a critical limitation in their vision encoders, which mainly focus on surface-level image details while overlooking the secondary layer of information, i.e., hidden messages. To address such risks, we demonstrate that preprocessing transformations combining Gaussian blur and histogram equalization can substantially enhance moderation performance.
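The abstract describes the mitigation only at a high level. Below is a minimal sketch, not the authors' released code, of what such a preprocessing step could look like: Gaussian blur followed by histogram equalization applied to an image before it is passed to a moderation model. The use of OpenCV, the kernel size, and the choice to equalize only the luminance channel are assumptions for illustration.

```python
# Sketch of the preprocessing defense described in the abstract:
# Gaussian blur + histogram equalization before moderation.
# Parameter choices here are illustrative assumptions, not the paper's settings.
import cv2
import numpy as np

def preprocess_for_moderation(image_bgr: np.ndarray,
                              blur_kernel: int = 15) -> np.ndarray:
    """Blur fine scene texture, then equalize contrast so the hidden
    message layer becomes more salient to a downstream classifier."""
    # Gaussian blur suppresses high-frequency surface details that mask
    # the embedded message.
    blurred = cv2.GaussianBlur(image_bgr, (blur_kernel, blur_kernel), 0)
    # Histogram equalization on the luminance (Y) channel boosts the
    # low-contrast structure that remains after blurring.
    ycrcb = cv2.cvtColor(blurred, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

# Usage: transform the input, then run an existing moderation classifier on it.
# img = cv2.imread("illusion.png")
# moderated_input = preprocess_for_moderation(img)
```

The intuition is that the classifier's vision encoder attends to surface-level detail; blurring removes that detail while equalization amplifies the coarse structure carrying the hidden message, making it easier to detect.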
