

Poster

Generalization-Preserved Learning: Closing the Backdoor to Catastrophic Forgetting in Continual Deepfake Detection

Xueyi Zhang · Peiyin Zhu · Chengwei Zhang · Zhiyuan Yan · Jikang Cheng · Mingrui Lao · Siqi Cai · Yanming Guo


Abstract:

Existing continual deepfake detection methods typically treat stability (retaining previously learned forgery knowledge) and plasticity (adapting to novel forgeries) as conflicting properties, emphasizing an inherent trade-off between them, while regarding generalization to unseen forgeries as secondary. In contrast, we reframe the problem: stability and plasticity can coexist and be jointly improved through the model's inherent generalization. Specifically, we propose Generalization-Preserved Learning (GPL), a novel framework consisting of two key components: (1) Hyperbolic Visual Alignment, which introduces learnable watermarks to align incremental data with the base set in hyperbolic space, alleviating inter-task distribution shifts; and (2) Generalized Gradient Projection, which blocks parameter updates that conflict with generalization constraints, ensuring that learning new knowledge does not interfere with previously acquired knowledge. Notably, GPL requires neither backbone retraining nor historical data storage. Experiments on four mainstream datasets (FF++, Celeb-DF v2, DFD, and DFDCP) demonstrate that GPL achieves an accuracy of 92.14%, outperforming replay-based state-of-the-art methods by 2.15% while reducing forgetting by 2.66%. Moreover, GPL achieves an 18.38% improvement on unseen forgeries using only 1% of the baseline's parameters, demonstrating efficient adaptation to continuously evolving forgery techniques.
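The abstract does not spell out how Generalized Gradient Projection is computed, but the general idea behind gradient-projection methods can be sketched as follows. This is a minimal NumPy illustration under our own assumptions: the function name, the QR-based orthonormalization, and the toy dimensions are illustrative, and the "protected" directions stand in for whatever generalization constraints the method actually derives.

```python
import numpy as np

def project_gradient(g, protected):
    """Project gradient g onto the orthogonal complement of `protected`.

    protected: (k, d) array whose rows span directions the update must
    not disturb (a stand-in for generalization-critical directions
    estimated from the base task).
    """
    # Orthonormalize the protected subspace via a reduced QR decomposition.
    q, _ = np.linalg.qr(protected.T)      # q: (d, k) with orthonormal columns
    # Remove the gradient components that lie inside the protected subspace.
    return g - q @ (q.T @ g)

rng = np.random.default_rng(0)
protected = rng.normal(size=(2, 8))       # two protected directions in R^8
grad = rng.normal(size=8)
proj = project_gradient(grad, protected)

# The projected update is orthogonal to every protected direction.
print(np.allclose(protected @ proj, 0))   # True
```

After projection, a gradient step along `proj` leaves the protected subspace untouched to first order, which is the mechanism by which such methods prevent new-task updates from overwriting old knowledge.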
