Adversarial and Random Transformations for Robust Domain Adaptation and Generalization.

Journal: Sensors (Basel, Switzerland)
PMID:

Abstract

Data augmentation has been widely used to improve generalization when training deep neural networks. Recent works show that using worst-case transformations or adversarial augmentation strategies can significantly improve accuracy and robustness. However, because many image transformations are non-differentiable, search algorithms such as reinforcement learning or evolution strategies must be applied, which are computationally impractical for large-scale problems. In this work, we show that simply applying consistency training with random data augmentation yields state-of-the-art results on domain adaptation (DA) and domain generalization (DG). To further improve accuracy and robustness with adversarial examples, we propose a differentiable adversarial data augmentation method based on spatial transformer networks (STNs). The combined adversarial and random-transformation-based method outperforms the state of the art on multiple DA and DG benchmark datasets. Furthermore, the proposed method shows desirable robustness to image corruption, which is also validated on commonly used datasets.
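
The abstract names two ingredients: consistency training with random data augmentation, and a differentiable adversarial augmentation built on spatial transformer network operations. The sketch below, written with PyTorch and torchvision (an assumption; this is not the authors' released code), illustrates how such components could be wired together. All hyperparameters, helper names, and the choice of augmentations are illustrative only.

    import torch
    import torch.nn.functional as F
    import torchvision.transforms as T

    # Random augmentation used for consistency training. The same randomly
    # drawn parameters are applied to the whole batch here, a simplification.
    random_augment = T.Compose([
        T.RandomResizedCrop(224, scale=(0.5, 1.0)),
        T.ColorJitter(0.4, 0.4, 0.4),
        T.RandomHorizontalFlip(),
    ])

    def consistency_loss(model, x):
        """KL divergence between predictions on an image and its random augmentation."""
        with torch.no_grad():
            target = F.softmax(model(x), dim=1)       # pseudo-target from the clean view
        logits_aug = model(random_augment(x))         # prediction on the augmented view
        return F.kl_div(F.log_softmax(logits_aug, dim=1), target, reduction="batchmean")

    def adversarial_spatial_augment(model, x, y, steps=3, step_size=0.05, eps=0.1):
        """Search a small affine warp that maximizes the classification loss.

        Because the warp is realized with affine_grid/grid_sample (the spatial
        transformer operations), it is differentiable, so it can be optimized
        by gradient ascent instead of black-box search.
        """
        identity = torch.tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]], device=x.device)
        delta = torch.zeros(x.size(0), 2, 3, device=x.device, requires_grad=True)
        for _ in range(steps):
            theta = identity.unsqueeze(0) + delta
            grid = F.affine_grid(theta, x.size(), align_corners=False)
            loss = F.cross_entropy(model(F.grid_sample(x, grid, align_corners=False)), y)
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta + step_size * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
        theta = identity.unsqueeze(0) + delta.detach()
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

In a training loop, the supervised loss on labeled source images would typically be combined with the consistency loss on unlabeled target images and a supervised loss on the adversarially warped images; the weighting of these terms is a design choice not specified by the abstract.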

Authors

  • Liang Xiao
  • Jiaolong Xu
    Unmanned Systems Technology Research Center, Defense Innovation Institute, Beijing 100071, China.
  • Dawei Zhao
    Shandong Youth University of Political Science, Jinan, Shandong 250103, China.
  • Erke Shang
    Unmanned Systems Technology Research Center, Defense Innovation Institute, Beijing 100071, China.
  • Qi Zhu
    Medical Research Center, Southwestern Hospital, Army Medical University, Chongqing 400037, China.
  • Bin Dai
    Unmanned Systems Technology Research Center, Defense Innovation Institute, Beijing 100071, China.