Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet.

Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Published Date:

Abstract

Adversarial attacks on deep neural networks (DNNs) have been known for several years. However, existing attacks achieve high success rates only when the victim DNN's information is fully known or can be estimated through structural similarity or massive queries. In this paper, we propose Attack on Attention (AoA), which targets attention, a semantic property commonly shared by DNNs. AoA enjoys a significant increase in transferability when the traditional cross-entropy loss is replaced with the attention loss. Since AoA alters only the loss function, it can easily be combined with other transferability-enhancement techniques to achieve state-of-the-art (SOTA) performance. We apply AoA to generate 50,000 adversarial samples from the ImageNet validation set that defeat many neural networks, and accordingly name the dataset DAmageNet. Thirteen well-trained DNNs are tested on DAmageNet, and all of them exhibit an error rate over 85 percent. Even with defenses or adversarial training, most models still have an error rate above 70 percent on DAmageNet. DAmageNet is the first universal adversarial dataset. It can be downloaded freely and serves as a benchmark for robustness testing and adversarial training.

Authors

  • Sizhe Chen
  • Zhengbao He
  • Chengjin Sun
  • Jie Yang
    Key Laboratory of Development and Maternal and Child Diseases of Sichuan Province, Department of Pediatrics, Sichuan University, Chengdu, China.
  • Xiaolin Huang
    Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, 200240, Shanghai, P.R. China.