

Poster

Open-Unfairness Adversarial Mitigation for Generalized Deepfake Detection

Zhaoyang Li · Zhu Teng · Baopeng Zhang · Jianping Fan


Abstract:

Deepfake detection methods are becoming increasingly crucial for identity security and have recently been employed to support legal proceedings. However, these methods often exhibit unfairness due to flawed logical reasoning, undermining the reliability of their predictions and raising concerns about their applicability in legal contexts. To mitigate this bias, existing approaches typically rely on predefined demographic attributes, such as race and gender. These assumptions are inherently limited, however, as different deepfake detectors exhibit substantial variations in fairness performance, often uncovering intricate and unforeseen bias patterns. To this end, we propose the Adversarial Open-Unfairness Discovery and Mitigation Network (AdvOU), a novel framework designed to mitigate unpredictable unfairness in deepfake detection. Our approach strengthens general deepfake detectors by equipping them with a lightweight Unfairness Regulator (UR), which dynamically identifies and mitigates bias. Furthermore, we propose an adversarial learning paradigm that alternates between training the Open-Unfairness Discovery (OUD) module and the Unfairness Adversarial Mitigation (UAM) module. The former intensifies unfairness within the UR to reveal underlying bias patterns, while the latter promotes fairness in the detector by enforcing adversarial robustness against the discovered unfairness. Extensive experiments on widely used deepfake datasets validate the effectiveness of our approach, which outperforms state-of-the-art methods in both fairness and generalization evaluations for cross-domain deepfake detection. The code is available at [link].
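The alternating scheme described above (OUD maximizing an unfairness signal, UAM minimizing task loss plus that signal) can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the authors' implementation: the scalar "unfairness score," the quadratic detection loss, and the function names are all hypothetical stand-ins for the real modules.

```python
# Toy sketch of the alternating OUD/UAM schedule from the abstract.
# The scalar parameters and losses here are illustrative assumptions:
# a real implementation would use neural modules and minibatch gradients.

def unfairness_score(detector_w, regulator_w):
    # Hypothetical differentiable proxy for how strongly the regulator
    # can expose bias in the detector (toy bilinear form).
    return detector_w * regulator_w

def detection_loss(detector_w):
    # Hypothetical task loss; minimized at detector_w == 1.0.
    return (detector_w - 1.0) ** 2

def train(steps=200, lr=0.05, lam=0.1):
    detector_w, regulator_w = 0.0, 1.0
    for _ in range(steps):
        # OUD step: the regulator ascends the unfairness score,
        # intensifying unfairness to reveal bias patterns.
        grad_u = detector_w                      # d(score)/d(regulator_w)
        regulator_w += lr * grad_u
        regulator_w = max(-1.0, min(1.0, regulator_w))  # keep bounded
        # UAM step: the detector descends task loss plus the adversarial
        # unfairness term, enforcing robustness against the regulator.
        grad_d = 2.0 * (detector_w - 1.0) + lam * regulator_w
        detector_w -= lr * grad_d
    return detector_w, regulator_w
```

With these toy losses the detector settles near the task optimum shifted slightly by the adversarial penalty, mirroring the trade-off the abstract describes between detection performance and robustness to discovered unfairness.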
