

Poster

CIARD: Cyclic Iterative Adversarial Robustness Distillation

Liming Lu · Shuchao Pang · Xu Zheng · Xiang GU · Anan Du · Yunhuai Liu · Yongbin Zhou


Abstract: Adversarial robustness distillation (ARD) aims to transfer both the performance and the robustness of a teacher model to a lightweight student model, enabling resilient performance in resource-constrained scenarios. Although existing ARD approaches enhance the student model's robustness, an inevitable by-product is degraded performance on clean examples. We attribute this problem, inherent in existing dual-teacher methods, to two causes: ① The divergent optimization objectives of the dual teachers, i.e., the clean and robust teachers, impede effective knowledge transfer to the student model, and ② The adversarial examples generated iteratively during training degrade the performance of the robust teacher model. To address these challenges, we propose a novel Cyclic Iterative ARD (CIARD) method with two key innovations: ① A multi-teacher framework with contrastive push-loss alignment that resolves the conflicting optimization objectives of the dual teachers, and ② Continuous adversarial retraining that preserves the robust teacher's performance against the varying adversarial examples. Extensive experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CIARD achieves remarkable performance, with an average $\textbf{3.53\%}$ improvement in adversarial defense rates across various attack scenarios and a $\textbf{5.87\%}$ increase in clean-sample accuracy, establishing a new benchmark for balancing model robustness and generalization. Our code is available at https://github.com/CIARD2025/CIARD.
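
The abstract describes two algorithmic components: dual-teacher distillation with a contrastive push loss, and continuous adversarial retraining of the robust teacher. The following is a minimal sketch of how one such training iteration could look in PyTorch; the model names, loss weights, the concrete push-loss form, and the PGD hyperparameters are illustrative assumptions and do not reflect the authors' released implementation (see the linked repository for that).

# Minimal sketch of a CIARD-style training step, written in PyTorch.
# Model names, loss weights, the push-loss form, and the PGD hyperparameters
# below are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft untargeted L-infinity PGD adversarial examples against `model`."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv


def kl_distill(student_logits, teacher_logits, T=4.0):
    """Temperature-scaled KL divergence commonly used for distillation."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)


def ciard_step(student, clean_teacher, robust_teacher,
               opt_student, opt_robust, x, y,
               w_clean=1.0, w_robust=1.0, w_push=0.1):
    """One training iteration: distill from both teachers, apply a push term
    against the conflicting teacher objective, then retrain the robust teacher
    on the freshly generated adversarial examples."""
    # 1) Adversarial examples are generated against the current student.
    x_adv = pgd_attack(student, x, y)

    # 2) Student update: the clean teacher guides clean inputs,
    #    the robust teacher guides adversarial inputs.
    with torch.no_grad():
        t_clean, t_robust = clean_teacher(x), robust_teacher(x_adv)
    s_clean, s_adv = student(x), student(x_adv)
    loss_kd = (w_clean * kl_distill(s_clean, t_clean)
               + w_robust * kl_distill(s_adv, t_robust))
    # Illustrative "push" term: move the student's adversarial predictions away
    # from the clean teacher's (conflicting) target distribution.
    loss_push = -w_push * F.kl_div(
        F.log_softmax(s_adv, dim=1), F.softmax(t_clean, dim=1),
        reduction="batchmean",
    )
    opt_student.zero_grad()
    (loss_kd + loss_push).backward()
    opt_student.step()

    # 3) Continuous adversarial retraining keeps the robust teacher effective
    #    on the evolving adversarial examples.
    opt_robust.zero_grad()
    F.cross_entropy(robust_teacher(x_adv), y).backward()
    opt_robust.step()


if __name__ == "__main__":
    # Toy usage with linear models and CIFAR-sized dummy tensors.
    make = lambda: nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    student, clean_teacher, robust_teacher = make(), make(), make()
    opt_s = torch.optim.SGD(student.parameters(), lr=0.1)
    opt_t = torch.optim.SGD(robust_teacher.parameters(), lr=0.01)
    x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
    ciard_step(student, clean_teacher, robust_teacher, opt_s, opt_t, x, y)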
