Poster
Prototype Guided Backdoor Defense
Venkat Adithya Amula · Sunayana Samavedam · Saurabh Saini · Avani Gupta · P Narayanan
Deep learning models are susceptible to {\em backdoor attacks}, in which malicious attackers perturb a small subset of training data with a {\em trigger} to cause misclassifications. Various triggers have been used, including semantic triggers that are easily realizable without requiring the attacker to manipulate the image. The emergence of generative AI has eased the generation of varied poisoned samples. Robustness across trigger types is crucial for an effective defense. We propose Prototype Guided Backdoor Defense (PGBD), a robust post-hoc defense that scales across different trigger types, including previously unsolved semantic triggers. PGBD exploits displacements in the geometric space of activations to penalize movement towards the trigger. This is done using a novel sanitization loss in a post-hoc fine-tuning step. The geometric approach scales easily to all types of attacks. PGBD achieves better performance than prior defenses across all settings. We also present the first defense against a new semantic attack on celebrity face images.
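The abstract does not give the exact formulation of the sanitization loss, so the following is only a minimal sketch of the general idea it describes: penalizing the component of activation displacement that points toward a trigger-associated prototype direction during post-hoc fine-tuning. The function name, tensor shapes, and the way the trigger direction is obtained are all assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a sanitization-style penalty that
# discourages activations of the fine-tuned model from moving toward a
# hypothetical "trigger direction" estimated from class prototypes.
import torch
import torch.nn.functional as F


def sanitization_penalty(feats_before, feats_after, trigger_direction):
    """Penalize displacement of activations along the trigger direction.

    feats_before: activations from the frozen, poisoned model      [B, D]
    feats_after:  activations from the model being fine-tuned      [B, D]
    trigger_direction: assumed estimate of the direction toward the
        attack-target prototype in activation space                [D]
    """
    direction = F.normalize(trigger_direction, dim=0)
    displacement = feats_after - feats_before        # how each feature vector moved
    projection = displacement @ direction            # signed movement along the trigger axis
    # Only movement toward the trigger prototype is penalized.
    return torch.relu(projection).mean()


# Toy usage with random tensors (shapes only; no real model involved).
if __name__ == "__main__":
    torch.manual_seed(0)
    before = torch.randn(8, 128)
    after = before + 0.1 * torch.randn(8, 128)
    trig_dir = torch.randn(128)
    print(float(sanitization_penalty(before, after, trig_dir)))
```

In practice such a term would be added to the usual fine-tuning objective (e.g., cross-entropy on clean data) with a weighting coefficient; the geometric formulation is what lets the same penalty apply regardless of whether the trigger is a patch, a blend, or a semantic attribute.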