

Poster

NAPPure: Adversarial Purification for Robust Image Classification under Non-Additive Perturbations

Junjie Nan · Jianing Li · Wei Chen · Mingkun Zhang · Xueqi Cheng


Abstract:

Adversarial purification has achieved great success in combating adversarial image perturbations, which are usually assumed to be additive. However, non-additive adversarial perturbations such as blur, occlusion, and distortion are also common in the real world. Under such perturbations, existing adversarial purification methods are much less effective, since they are designed around the additive assumption. In this paper, we propose an extended adversarial purification framework named NAPPure, which also handles non-additive perturbations. Specifically, we first model the generation process of an adversarial image, and then disentangle the underlying clean image and the perturbation parameters through likelihood maximization. Experiments on the GTSRB and CIFAR-10 datasets show that NAPPure significantly boosts the robustness of image classification models against non-additive perturbations.
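To make the likelihood-maximization idea in the abstract concrete, the sketch below jointly estimates a clean image and a perturbation parameter for one example of a non-additive corruption (Gaussian blur with unknown strength). This is a hypothetical illustration, not the authors' NAPPure implementation: the blur operator, the total-variation image prior, and every function name and hyperparameter here are illustrative assumptions.

```python
# Hypothetical sketch: purify an image corrupted by a parametric non-additive
# transform by maximizing the data likelihood jointly over the clean image
# estimate and the perturbation parameter (here, a blur strength sigma).
import torch
import torch.nn.functional as F

def gaussian_blur(x, sigma, kernel_size=11):
    """Differentiable Gaussian blur; sigma is treated as a learnable parameter."""
    coords = torch.arange(kernel_size, dtype=x.dtype, device=x.device) - kernel_size // 2
    kernel_1d = torch.exp(-coords**2 / (2 * sigma**2))
    kernel_1d = kernel_1d / kernel_1d.sum()
    kernel_2d = torch.outer(kernel_1d, kernel_1d)
    c = x.shape[1]
    kernel = kernel_2d.expand(c, 1, kernel_size, kernel_size).contiguous()
    return F.conv2d(x, kernel, padding=kernel_size // 2, groups=c)

def total_variation(x):
    """Simple smoothness prior standing in for a learned image prior."""
    return (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
           (x[..., :, 1:] - x[..., :, :-1]).abs().mean()

def purify(y_adv, steps=300, lr=0.05, prior_weight=0.1):
    """Jointly estimate the clean image and the blur parameter by minimizing
    a negative log-likelihood (reconstruction term) plus an image prior."""
    x = y_adv.clone().requires_grad_(True)                               # clean-image estimate
    log_sigma = torch.zeros(1, device=y_adv.device, requires_grad=True)  # perturbation parameter
    opt = torch.optim.Adam([x, log_sigma], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        sigma = log_sigma.exp()               # keep sigma positive
        recon = gaussian_blur(x, sigma)       # forward corruption model applied to the estimate
        nll = F.mse_loss(recon, y_adv)        # Gaussian observation likelihood
        loss = nll + prior_weight * total_variation(x)
        loss.backward()
        opt.step()
    return x.detach().clamp(0, 1), log_sigma.exp().item()

# Usage (assumed input): a batch of shape (N, C, H, W) with values in [0, 1]
# x_clean_hat, sigma_hat = purify(adversarial_batch)
```

The purified estimate would then be passed to the downstream classifier; the actual paper's generation model, prior, and optimization procedure may differ substantially from this toy version.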
