Fast convergence rates of deep neural networks for classification.

Journal: Neural Networks: The Official Journal of the International Neural Network Society

Abstract

We derive fast convergence rates for a deep neural network (DNN) classifier with the rectified linear unit (ReLU) activation function learned using the hinge loss. We consider three cases for the true model: (1) a smooth decision boundary, (2) a smooth conditional class probability, and (3) a margin condition (i.e., the probability of inputs near the decision boundary is small). We show that the DNN classifier learned using the hinge loss achieves fast convergence rates in all three cases, provided that the architecture (i.e., the number of layers, the number of nodes, and the sparsity) is carefully selected. An important implication is that DNN architectures are flexible enough to be used in various cases without much modification. In addition, we consider a DNN classifier learned by minimizing the cross-entropy, and show that this classifier achieves a fast convergence rate under the condition that both the noise exponent and the margin exponent are large. Although these two conditions are strong, we explain that they are not unreasonable for image classification problems. To confirm our theoretical explanation, we present the results of a small numerical study comparing the hinge loss and the cross-entropy.
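For reference, the two surrogate losses compared in the abstract have the following standard forms (a sketch in generic notation; the paper's exact definitions, scaling, and constants may differ). For labels y in {-1, +1} and a real-valued network output f(x),

    \ell_{\mathrm{hinge}}\bigl(y, f(x)\bigr) = \max\bigl\{0,\; 1 - y f(x)\bigr\},
    \qquad
    \ell_{\mathrm{CE}}\bigl(y, f(x)\bigr) = \log\bigl(1 + e^{-y f(x)}\bigr).

A common formulation of the margin (low-noise) condition with noise exponent q, stated for the conditional class probability \eta(x) = P(Y = 1 \mid X = x), is

    P\bigl(\,\lvert \eta(X) - \tfrac{1}{2} \rvert \le t \,\bigr) \le C\, t^{q}
    \quad \text{for all } t > 0 \text{ and some constant } C > 0,

so that a larger q means fewer inputs lie near the decision boundary. The noise and margin exponents in the paper may be defined with their own conventions; the displays above are only a generic reminder of the quantities involved.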

Authors

  • Yongdai Kim
    Department of Statistics and Department of Data Science, Seoul National University, Seoul 08826, Republic of Korea. Electronic address: ydkim0903@gmail.com.
  • Ilsang Ohn
    Department of Applied and Computational Mathematics and Statistics, The University of Notre Dame, Indiana 46530, USA.
  • Dongha Kim
    Department of Biochemistry, Konkuk University School of Medicine, Seoul, Korea.