Estimates on compressed neural networks regression.

Journal: Neural Networks: the official journal of the International Neural Network Society

Abstract

When the number of neural elements n in a neural network exceeds the sample size m, the overfitting problem arises because there are more parameters than data points (more variables than constraints). To overcome overfitting, we propose reducing the number of neural elements by means of a compressed projection A, which need not satisfy the Restricted Isometry Property (RIP). By applying probability inequalities and the approximation properties of feedforward neural networks (FNNs), we prove that solving the FNN regression learning algorithm in the compressed domain rather than the original domain reduces the sample error at the price of an increased, but controlled, approximation error. Covering number theory is used to estimate the excess error, and an upper bound on the excess error is given.
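To make the compressed-domain idea concrete, here is a minimal numerical sketch, not the paper's actual construction: a feedforward network with n random-weight sigmoid units is fit to m < n noisy samples, once directly and once after compressing its n neural elements down to k dimensions with a random Gaussian projection A (no RIP condition is imposed or checked). The random-weight hidden layer, the Gaussian choice of A, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

m, d = 50, 1          # sample size m and input dimension
n, k = 500, 20        # n neural elements >> m (overfitting regime); k << n

# Training data: noisy samples of a smooth target function.
X = rng.uniform(-1.0, 1.0, size=(m, d))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(m)

# Hidden layer with random inner weights/biases and sigmoid activation,
# giving the m x n feature matrix H of the FNN.
W = rng.standard_normal((d, n))
b = rng.standard_normal(n)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Uncompressed fit: more parameters (n) than samples (m), so the
# minimum-norm least-squares solution fits the noise.
c_full, *_ = np.linalg.lstsq(H, y, rcond=None)

# Compressed fit: project the n neural elements to k dimensions with a
# random Gaussian A, then solve the regression in the compressed domain.
A = rng.standard_normal((n, k)) / np.sqrt(k)
c_comp, *_ = np.linalg.lstsq(H @ A, y, rcond=None)

# Evaluate both estimators on fresh test points.
X_test = rng.uniform(-1.0, 1.0, size=(1000, d))
H_test = 1.0 / (1.0 + np.exp(-(X_test @ W + b)))
y_test = np.sin(np.pi * X_test[:, 0])

err_full = np.mean((H_test @ c_full - y_test) ** 2)
err_comp = np.mean((H_test @ A @ c_comp - y_test) ** 2)
print(f"test MSE, n={n} elements:      {err_full:.4f}")
print(f"test MSE, compressed to k={k}: {err_comp:.4f}")
```

The compressed fit estimates only k coefficients instead of n, so the sample (estimation) error shrinks, while projecting through A can only worsen the best achievable fit; this is the increased but controlled approximation error described in the abstract.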

Authors

  • Yongquan Zhang
    Department of Information and Mathematics Sciences, China Jiliang University, Hangzhou 310018, Zhejiang Province, PR China. Electronic address: zyqmath@163.com.
  • Youmei Li
    Department of Information and Mathematics Sciences, China Jiliang University, Hangzhou 310018, Zhejiang Province, PR China.
  • Jianyong Sun
    School of Engineering, University of Greenwich, Central Avenue, Chatham Maritime, Kent ME4 4TB, UK.
  • Jiabing Ji
    Department of Information and Mathematics Sciences, China Jiliang University, Hangzhou 310018, Zhejiang Province, PR China.