Bridged adversarial training.

Journal: Neural Networks: The Official Journal of the International Neural Network Society

Abstract

Adversarial robustness is widely considered an essential property of deep neural networks. In this study, we find that adversarially trained models can have significantly different characteristics in terms of margin and smoothness, even when they show similar robustness. Motivated by this observation, we investigate the effect of different regularizers and identify the negative effect of the smoothness regularizer on maximizing the margin. Based on these analyses, we propose a new method, bridged adversarial training, which mitigates this negative effect by bridging the gap between clean and adversarial examples. We provide theoretical and empirical evidence that the proposed method achieves stable and improved robustness, especially for large perturbations.
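
The abstract only says that the method bridges the gap between clean and adversarial examples; one plausible concrete reading is to regularize the model's predictions along intermediate points on the line between a clean input and its PGD adversarial counterpart. The PyTorch sketch below illustrates that reading. The interpolation scheme, the number of bridges `m`, the KL-based smoothness term, and all hyperparameters (`eps`, `alpha`, `steps`, `beta`) are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of a "bridged" adversarial training loss: standard PGD
# adversarial example generation, plus a smoothness penalty summed over
# consecutive points on the clean-to-adversarial path. All hyperparameter
# values below are assumptions for illustration, not the paper's settings.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-infinity PGD attack (hyperparameters are assumptions)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def bridged_loss(model, x, y, m=3, beta=6.0):
    """Cross-entropy on the adversarial example plus a KL smoothness term
    between predictions at consecutive bridge points x_k = x + (k/m)(x_adv - x)."""
    x_adv = pgd_attack(model, x, y)
    logits = [model(x + (k / m) * (x_adv - x)) for k in range(m + 1)]
    loss = F.cross_entropy(logits[-1], y)  # loss on the adversarial endpoint
    for k in range(m):
        # KL divergence from the prediction at step k to the one at step k+1,
        # encouraging smooth change along the clean-to-adversarial path.
        loss = loss + (beta / m) * F.kl_div(
            F.log_softmax(logits[k + 1], dim=1),
            F.softmax(logits[k], dim=1),
            reduction="batchmean",
        )
    return loss
```

In a training loop, `bridged_loss(model, x, y).backward()` would replace the usual adversarial cross-entropy; splitting the smoothness penalty across `m` intermediate points is what distinguishes this from applying a single clean-versus-adversarial KL term.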

Authors

  • Hoki Kim
    Institute of Engineering Research, Seoul National University, Gwanak-gu 08826, Republic of Korea.
  • Woojin Lee
    Department of Industrial Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea.
  • Sungyoon Lee
    Department of Computer Science, Hanyang University, Seongdong-gu 04763, Republic of Korea.
  • Jaewook Lee
    Department of Industrial Engineering, Seoul National University, Seoul, Republic of Korea.