

Poster

Adversarial Robust Memory-Based Continual Learner

Xiaoyue Mi · Fan Tang · Zonghan Yang · Danding Wang · Juan Cao · Peng Li · Yang Liu


Abstract:

Despite the remarkable advances in continual learning, the adversarial vulnerability of such methods has not been fully discussed. We delve into the adversarial robustness of memory-based continual learning algorithms and observe limited robustness improvement when directly applying adversarial training techniques. Our preliminary studies reveal the twin challenges of building adversarially robust continual learners: accelerated forgetting in continual learning and gradient obfuscation in adversarial robustness. In this study, we put forward a novel adversarially robust memory-based continual learner that adjusts data logits to mitigate the forgetting of past tasks caused by adversarial samples. Furthermore, we devise a gradient-based data selection mechanism to overcome the gradient obfuscation caused by limited stored data. The proposed approach integrates with existing memory-based continual learning and adversarial training algorithms in a plug-and-play manner. Extensive experiments on Split-CIFAR10/100 and Split-Tiny-ImageNet demonstrate the effectiveness of our approach, achieving a maximum forgetting reduction of 34.17% on adversarial data for ResNet and 20.10% for ViT.
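The abstract does not include an implementation. As a rough, non-authoritative sketch of the two ingredients it names, the Python snippet below pairs standard PGD adversarial example generation (Madry et al.) with a hypothetical gradient-norm scoring rule for choosing which samples enter the memory buffer. The function names, the gradient-norm selection criterion, and the hyperparameters (eps, alpha, steps) are illustrative assumptions, not the authors' exact mechanism.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard PGD adversarial example generation in the L-infinity ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def gradient_based_selection(model, x, y, buffer_size):
    """Hypothetical gradient-based memory selection: keep the samples whose
    per-example input-loss gradients have the largest norm, on the assumption
    that such samples carry the most useful signal for robust replay.
    (model is assumed to be in eval mode so samples do not interact
    through batch-norm statistics.)"""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y, reduction="sum")
    grad = torch.autograd.grad(loss, x)[0]
    scores = grad.flatten(1).norm(dim=1)  # one scalar score per sample
    keep = scores.topk(min(buffer_size, len(x))).indices
    return x[keep].detach(), y[keep]
```

In a memory-based learner in the style of experience replay, the selected pairs would typically be stored in the buffer and revisited alongside adversarial examples of the current task; the paper's logit-adjustment step is omitted here because the abstract does not specify its form.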
