Poster
FastJSMA: Accelerating Jacobian-based Saliency Map Attacks through Gradient Decoupling
Zhenghao Gao · Shengjie Xu · Zijing Li · Meixi Chen · Chaojian Yu · Yuanjie Shao · Changxin Gao
Abstract:
Adversarial attacks play a critical role in evaluating the robustness of deep learning models. The Jacobian-based Saliency Map Attack (JSMA) is an interpretable adversarial method that offers fine-grained pixel-level control and provides valuable insights into model vulnerabilities. However, its quadratic computational complexity $O(M^2 \times N)$ renders it impractical for large-scale datasets, limiting its application despite its inherent value. This paper proposes FastJSMA, an efficient attack method that addresses these computational limitations. Our approach introduces a gradient decoupling mechanism that decomposes the Jacobian calculation into complementary class suppression ($g^-$) and class excitation ($g^+$) gradients, reducing complexity to $O(M\sqrt{N})$. Additionally, we implement a class probing mechanism and an adaptive saliency threshold to further optimize the process. Experimental results across multiple datasets demonstrate that FastJSMA maintains high attack success rates (98.4\% relative efficiency) while dramatically reducing computation time—requiring only 1.8\% of JSMA's processing time on CIFAR-100 and successfully operating on ImageNet, where traditional JSMA fails due to memory constraints. This advancement enables the practical application of interpretable saliency map-based attacks on large-scale datasets, balancing effectiveness with computational efficiency.
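The core idea of gradient decoupling can be illustrated in a minimal sketch: instead of computing the full Jacobian over all classes per pixel, one combines a class-excitation gradient $g^+$ (of the target-class score) with a class-suppression gradient $g^-$ (of the competing classes) into a JSMA-style saliency score. The function names and the exact combination rule below are illustrative assumptions based on the classic JSMA criterion; the paper's class probing and adaptive threshold steps are not shown.

```python
import numpy as np

def decoupled_saliency(g_plus, g_minus):
    """JSMA-style saliency from two decoupled gradients.

    g_plus:  gradient of the target-class score w.r.t. each pixel
             (class excitation, g^+ in the abstract's notation)
    g_minus: aggregated gradient of the competing-class scores
             (class suppression, g^-)

    A pixel is salient when increasing it raises the target score
    (g_plus > 0) while lowering the competitors (g_minus < 0); the
    classic JSMA score for such pixels is g_plus * |g_minus|.
    """
    mask = (g_plus > 0) & (g_minus < 0)
    return np.where(mask, g_plus * np.abs(g_minus), 0.0)

def top_k_pixels(saliency, k):
    """Flattened indices of the k most salient pixels to perturb."""
    return np.argsort(saliency.ravel())[::-1][:k]
```

Because only two backward passes are needed to obtain $g^+$ and $g^-$ (rather than one per pixel pair over all $N$ classes), the per-iteration cost drops from the quadratic regime toward the $O(M\sqrt{N})$ complexity claimed above.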