Abstract
Adversarial examples are usually generated by adding adversarial perturbations to clean samples, and are designed to deceive a model into making wrong classifications. Adversarial robustness refers to a model's ability to resist such attacks, and a mainstream method for improving it is adversarial training with Projected Gradient Descent (PGD). However, PGD adversarial training is often criticized as time-consuming, because constructing each adversarial example requires multiple gradient steps. Fast adversarial training improves adversarial robustness in less time, but it can only train for a limited number of epochs, leading to sub-optimal performance. This paper demonstrates that a multi-exit network can reduce the impact of adversarial perturbations by outputting easily identified samples at early exits, thereby improving adversarial robustness. Furthermore, we find that the multi-exit network can prevent the catastrophic overfitting that arises in single-step adversarial training. Specifically, we find that, in a multi-exit network, (1) the norm of the fully connected layer's weights at a non-overfitted exit is much smaller than that at an overfitted exit; and (2) catastrophic overfitting occurs when the late exits have larger weight norms than the early exits. Based on these findings, we propose an approach to alleviating the catastrophic overfitting of the multi-exit network. Compared to PGD adversarial training, our approach trains a model with lower time complexity and higher empirical robustness. Extensive experiments evaluating our approach against various adversarial attacks demonstrate superior robust accuracy on CIFAR-10, CIFAR-100 and SVHN.
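To make the time-cost contrast concrete, the sketch below compares single-step (FGSM-style) and multi-step PGD construction of adversarial examples under an L∞ budget. This is a minimal PyTorch illustration, not the paper's code; the names `model`, `eps`, `alpha`, and `steps`, and the random-start convention, are our assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Single-step attack: one forward/backward pass per batch. Cheap,
    but single-step training is where catastrophic overfitting arises."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps, alpha, steps):
    """Multi-step attack: `steps` forward/backward passes per batch,
    which is the training cost the abstract refers to."""
    # Random start inside the epsilon-ball (a common PGD convention).
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascend the loss, then project back into the epsilon-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

Each PGD iteration needs a full forward and backward pass, so, for example, 10-step PGD adversarial training costs roughly ten times more gradient computation per batch than single-step training; this is the gap fast adversarial training tries to close.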
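The weight-norm findings can likewise be monitored with little machinery. The toy two-exit classifier below pairs each exit with its own fully connected layer, adds a confidence-threshold rule for emitting easy samples at the early exit, and exposes the per-exit weight norms that findings (1) and (2) are stated over. The architecture, the `threshold` rule, and the helper names are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MultiExitNet(nn.Module):
    """Toy two-exit CNN; each exit has its own fully connected classifier."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.exit1 = nn.Linear(32, num_classes)   # early exit
        self.exit2 = nn.Linear(64, num_classes)   # late (final) exit

    def forward(self, x):
        h1 = self.block1(x)
        out1 = self.exit1(self.pool(h1).flatten(1))
        h2 = self.block2(h1)
        out2 = self.exit2(self.pool(h2).flatten(1))
        return out1, out2

@torch.no_grad()
def predict_with_early_exit(model, x, threshold=0.9):
    """Emit a sample at the first exit whose softmax confidence clears
    `threshold`; easily identified samples leave early, shortening the
    path along which adversarial perturbations can act."""
    out1, out2 = model(x)
    conf1 = out1.softmax(dim=1).max(dim=1).values
    return torch.where(conf1 >= threshold, out1.argmax(1), out2.argmax(1))

def exit_weight_norms(model):
    """Frobenius norms of the exits' FC weights; per the abstract, a late
    exit whose norm grows past the early exits signals catastrophic
    overfitting during single-step adversarial training."""
    return [model.exit1.weight.norm().item(),
            model.exit2.weight.norm().item()]
```

Logging `exit_weight_norms(model)` each epoch of single-step adversarial training would expose the signature described above; the paper's concrete strategy for alleviating the overfitting based on this signal is given in the full text.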
| Field | Value |
| --- | --- |
| Original language | English |
| Pages (from-to) | 1-11 |
| Number of pages | 11 |
| Journal | Neural Networks |
| Volume | 150 |
| Early online date | 25 Feb 2022 |
| DOIs | |
| Publication status | Published - Jun 2022 |
| Externally published | Yes |
Bibliographical note
This work was supported in part by the Natural Science Foundation of China (Grants 61732011, 62176160, 61976141), the Natural Science Foundation of Shenzhen, China (University Stability Support Program no. 20200804193857002), and in part by the Interdisciplinary Innovation Team of Shenzhen University, China.

Keywords
- Adversarial defense
- Adversarial robustness
- Fast adversarial training
- Multi-exit network