Poster
Scaling and Taming Adversarial Training with Synthetic Data
Juntao Wu · Xianting Huang · Yu Chen · Shuai Pang · Ke Wang
Despite the success of adversarial training on small datasets, applying it to large-scale datasets like ImageNet remains challenging. Previous attempts that use synthetic data show only limited improvements. This work investigates how synthetic data scaling, model scaling, and training strategies affect adversarial training on ImageNet, providing deeper insights into large-scale robustness. In the process, we observe a notable loss-oscillation phenomenon that leads to adversarial overfitting, and we propose strategies to mitigate it. Experimental results show that, under AutoAttack on ImageNet-1K, our method achieves a robust accuracy of 71.54%. Our findings highlight the crucial role of synthetic data and model scaling in enhancing adversarial robustness on large-scale benchmarks and point to a new direction for training robust visual representations at scale.
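For readers unfamiliar with the training regime the abstract refers to, the sketch below shows one generic PGD-based adversarial training step in PyTorch. It is an illustration of standard adversarial training, not the authors' exact recipe; the perturbation budget, step size, attack iterations, and the toy model are assumptions chosen only to keep the example self-contained and runnable.

```python
# Minimal sketch of one PGD-based adversarial training step.
# Generic illustration only; eps, alpha, steps, and the toy model below
# are illustrative assumptions, not the paper's actual configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=4/255, alpha=1/255, steps=3):
    """Gradient-ascent steps on the cross-entropy loss, projected back
    into an L-infinity ball of radius eps around the clean input."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project onto the eps-ball
        x_adv = x_adv.clamp(0, 1)                  # stay in valid pixel range
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Generate adversarial examples on the fly and train on them."""
    model.eval()                      # craft the attack with fixed BN statistics
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Toy example: a tiny CNN on random tensors standing in for (synthetic) images.
    model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.rand(4, 3, 32, 32)
    y = torch.randint(0, 10, (4,))
    print("adversarial training loss:", adversarial_training_step(model, opt, x, y))
```

In large-scale settings like the one studied here, this inner loop would run over mixed real and synthetic ImageNet batches, and robustness would be evaluated afterwards with a stronger ensemble attack such as AutoAttack.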