SPLASH: Learnable activation functions for improving accuracy and adversarial robustness.

Journal: Neural Networks: the official journal of the International Neural Network Society
Published Date:

Abstract

We introduce SPLASH units, a class of learnable activation functions shown to simultaneously improve the accuracy of deep neural networks and their robustness to adversarial attacks. SPLASH units have a simple parameterization yet maintain the ability to approximate a wide range of non-linear functions. SPLASH units (1) are continuous; (2) are grounded (f(0)=0); (3) use symmetric hinges; and (4) place their hinges at fixed locations derived from the data (i.e., no learning is required for the hinge locations). Compared to nine other learned and fixed activation functions, including ReLU and its variants, SPLASH units show superior performance across three datasets (MNIST, CIFAR-10, and CIFAR-100) and four architectures (LeNet5, All-CNN, ResNet-20, and Network-in-Network). Furthermore, we show that SPLASH units significantly increase the robustness of deep neural networks to adversarial attacks. Our experiments on both black-box and white-box adversarial attacks show that commonly used architectures, namely LeNet5, All-CNN, Network-in-Network, and ResNet-20, can be up to 31% more robust to adversarial attacks simply by using SPLASH units instead of ReLUs. Finally, we show the benefits of using SPLASH activation functions in larger architectures designed for non-trivial datasets such as ImageNet.
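
Illustrative sketch

The abstract does not spell out SPLASH's exact parameterization, but the four stated properties (continuity, grounding at f(0)=0, symmetric hinges, and fixed hinge locations derived from the data) can be realized as a piecewise-linear function: a sum of shifted ReLUs with fixed, symmetric hinge offsets and learnable slope coefficients. The PyTorch sketch below illustrates one such construction under that assumption; the class name SplashLike, the default hinge values, and the ReLU-style initialization are placeholders rather than details taken from the paper.

    import torch
    import torch.nn as nn

    class SplashLike(nn.Module):
        """Piecewise-linear activation built from symmetric hinges.

        The hinge offsets are fixed (e.g., chosen from the pre-activation
        distribution, in the spirit of the abstract's "derived from the
        data"); only the slope coefficients are learned.
        """

        def __init__(self, hinges=(0.0, 1.0, 2.0)):
            super().__init__()
            # Fixed, non-negative hinge locations; hinges[0] = 0 keeps f(0) = 0.
            self.register_buffer("b", torch.tensor(hinges))
            # Learnable slopes for the right- and left-facing hinges.
            self.a_pos = nn.Parameter(torch.zeros(len(hinges)))
            self.a_neg = nn.Parameter(torch.zeros(len(hinges)))
            # Start as a plain ReLU: unit slope on the positive side only.
            with torch.no_grad():
                self.a_pos[0] = 1.0

        def forward(self, x):
            # f(x) = sum_s a_pos[s]*max(0, x - b_s) + a_neg[s]*max(0, -x - b_s)
            x = x.unsqueeze(-1)              # broadcast over the hinge dimension
            pos = torch.relu(x - self.b)     # hinges on the positive side
            neg = torch.relu(-x - self.b)    # mirrored hinges on the negative side
            return (pos * self.a_pos + neg * self.a_neg).sum(dim=-1)

    # Used this way, the module is a drop-in replacement for nn.ReLU(),
    # with one instance per layer so each layer learns its own slopes.
    act = SplashLike(hinges=(0.0, 1.0, 2.0, 3.0))
    y = act(torch.randn(8, 16))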

Authors

  • Mohammadamin Tavakoli
    Department of Computer Science, University of California, Irvine, United States of America. Electronic address: mohamadt@uci.edu.
  • Forest Agostinelli
    Department of Computer Science.
  • Pierre Baldi
Department of Computer Science and Department of Biological Chemistry, University of California, Irvine, Irvine, CA 92697, USA.