Deep ReLU neural networks in high-dimensional approximation.

Journal: Neural Networks: The Official Journal of the International Neural Network Society

Abstract

We study the computational complexity of deep ReLU (Rectified Linear Unit) neural networks for the approximation of functions from the Hölder-Zygmund space of mixed smoothness defined on the d-dimensional unit cube when the dimension d may be very large. The approximation error is measured in the norm of an isotropic Sobolev space. For every function f from the Hölder-Zygmund space of mixed smoothness, we explicitly construct a deep ReLU neural network whose output approximates f with a prescribed accuracy ε, and we prove tight dimension-dependent upper and lower bounds, explicit in d and ε, on the computational complexity of the approximation, characterized as the size and depth of this deep ReLU neural network. The proof of these results relies, in particular, on approximation by sparse-grid sampling recovery based on the Faber series.
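To make the connection between the Faber series and ReLU networks concrete, below is a minimal Python sketch in the simplest univariate case, not the paper's d-dimensional sparse-grid construction: a truncated Faber series is assembled from dyadic hat functions, each of which is exactly representable by three ReLU units. The function names and the test function are illustrative choices and are not taken from the paper.

import numpy as np

def relu(x):
    # ReLU activation; three shifted ReLUs combine into one hat function.
    return np.maximum(x, 0.0)

def hat(x, k, j):
    # Faber-Schauder hat function at dyadic level k, position j, supported
    # on [j/2^k, (j+1)/2^k] with peak value 1 at the interval midpoint.
    h = 2.0 ** (-k)
    a, m, b = j * h, (j + 0.5) * h, (j + 1) * h
    return (relu(x - a) - 2.0 * relu(x - m) + relu(x - b)) / (0.5 * h)

def faber_coeff(f, k, j):
    # Faber coefficient: the second dyadic difference of f at level k.
    h = 2.0 ** (-k)
    return f((j + 0.5) * h) - 0.5 * (f(j * h) + f((j + 1) * h))

def faber_approx(f, x, K):
    # Truncated Faber series on [0,1] up to level K; it coincides with the
    # piecewise linear interpolant of f on the dyadic grid of step 2^(-(K+1)).
    s = f(0.0) * (1.0 - x) + f(1.0) * x  # boundary (coarsest-level) terms
    for k in range(K + 1):
        for j in range(2 ** k):
            s = s + faber_coeff(f, k, j) * hat(x, k, j)
    return s

f = lambda t: np.sin(np.pi * np.asarray(t, dtype=float))
x = np.linspace(0.0, 1.0, 1001)
for K in (2, 4, 6):
    err = np.max(np.abs(f(x) - faber_approx(f, x, K)))
    print(f"levels <= {K}: sup-error = {err:.2e}")  # decays roughly like 4^(-K)

Since each hat function costs three ReLUs and there are O(2^K) coefficients up to level K, the sketch already hints at the trade-off between network size and accuracy ε; the paper makes this trade-off precise in d dimensions via sparse grids.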

Authors

  • Dinh Dũng
Information Technology Institute, Vietnam National University, Hanoi, 144 Xuan Thuy, Cau Giay, Hanoi, Viet Nam. Electronic address: dinhzung@gmail.com.
  • Van Kien Nguyen
    Faculty of Basic Sciences, University of Transport and Communications, No.3 Cau Giay Street, Lang Thuong Ward, Dong Da District, Hanoi, Viet Nam. Electronic address: kiennv@utc.edu.vn.