On the Universal Approximation Properties of Deep Neural Networks Using MAM Neurons

Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence

Abstract

As neural networks are trained to perform tasks of increasing complexity, their size grows, which presents several challenges when deploying them on devices with limited resources. To cope with this, a recently proposed approach replaces the classical Multiply-and-ACcumulate (MAC) neurons in the hidden layers with Multiply-And-Max/min (MAM) neurons, whose selective behavior helps identify the important interconnections and thus allows aggressive pruning of the others. Hybrid MAM&MAC structures promise a 10x or even 100x reduction in memory footprint compared with what can be obtained by pruning MAC-only structures. However, this promise rests on the assumption that MAM&MAC architectures have the same expressive power as MAC-only ones. To substantiate this assumption, we take a step toward the theoretical characterization of the capabilities of mixed MAM&MAC networks. We prove, with two theorems, that two hidden MAM layers followed by a MAC neuron, possibly with a normalization stage, form a universal approximator.
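The selective behavior mentioned in the abstract can be made concrete with a minimal sketch. The snippet below assumes, as the name Multiply-And-Max/min suggests, that a MAM neuron replaces the MAC sum of products with the sum of only the largest and smallest product; this definition is an assumption inferred from the acronym, not the authors' exact formulation.

    import numpy as np

    def mam_neuron(x, w, b=0.0):
        """Sketch of a Multiply-And-Max/min (MAM) neuron (assumed form).

        Instead of accumulating all products w_i * x_i as a MAC neuron
        does, it keeps only the largest and the smallest product, so at
        most two interconnections per neuron contribute to the output.
        """
        p = w * x                      # element-wise products
        return np.max(p) + np.min(p) + b

    def mac_neuron(x, w, b=0.0):
        """Classical Multiply-and-ACcumulate (MAC) neuron, for comparison."""
        return np.dot(w, x) + b

    rng = np.random.default_rng(0)
    x = rng.normal(size=8)             # example input vector
    w = rng.normal(size=8)             # example weight vector
    print("MAM output:", mam_neuron(x, w))
    print("MAC output:", mac_neuron(x, w))

Under this assumed formulation, only two of the products ever reach the output, so training concentrates relevance on a few interconnections per neuron, which is what makes the aggressive pruning of the remaining weights possible.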

Authors

  • Philippe Bich
  • Andriy Enttsel
  • Luciano Prono
  • Alex Marchioni
  • Fabio Pareschi
  • Mauro Mangia
  • Gianluca Setti
  • Riccardo Rovatti
