ADA-Tucker: Compressing deep neural networks via adaptive dimension adjustment Tucker decomposition.

Journal: Neural Networks: The Official Journal of the International Neural Network Society
Published Date:

Abstract

Despite the recent success of deep learning models in numerous applications, their widespread use on mobile devices is seriously impeded by storage and computational requirements. In this paper, we propose a novel network compression method called Adaptive Dimension Adjustment Tucker decomposition (ADA-Tucker). With learnable core tensors and transformation matrices, ADA-Tucker performs Tucker decomposition of arbitrary-order tensors. Furthermore, we propose that weight tensors with proper orders and balanced dimensions are easier to compress. Therefore, the high flexibility in decomposition choice distinguishes ADA-Tucker from all previous low-rank models. To compress further, we extend the model to Shared Core ADA-Tucker (SCADA-Tucker) by defining a single core tensor shared by all layers. Our methods require no overhead for recording indices of non-zero elements. Without loss of accuracy, our methods reduce the storage of LeNet-5 and LeNet-300 by ratios of 691× and 233×, respectively, significantly outperforming the state of the art. The effectiveness of our methods is also evaluated on three other benchmarks (CIFAR-10, SVHN, ILSVRC12) and modern deep networks (ResNet, Wide-ResNet).
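
For illustration only, the minimal sketch below (not the authors' implementation; the helper names and tensor shapes are hypothetical) shows the basic Tucker operation that the abstract describes: a full weight tensor is rebuilt from a small core tensor and one transformation (factor) matrix per mode, so only the core and the factor matrices need to be stored.

    import numpy as np

    def mode_n_product(tensor, matrix, mode):
        """Multiply `tensor` by `matrix` along dimension `mode` (the mode-n product)."""
        t = np.moveaxis(tensor, mode, 0)              # bring the target mode to the front
        front, rest = t.shape[0], t.shape[1:]
        t = matrix @ t.reshape(front, -1)             # contract over that mode
        return np.moveaxis(t.reshape((matrix.shape[0],) + rest), 0, mode)

    def tucker_reconstruct(core, factors):
        """Rebuild a full tensor from a small core and one factor matrix per mode."""
        w = core
        for mode, u in enumerate(factors):
            w = mode_n_product(w, u, mode)
        return w

    # Hypothetical shapes: a 4x4x4 core expanded to a 16x16x9 weight tensor.
    core = np.random.randn(4, 4, 4)
    factors = [np.random.randn(16, 4), np.random.randn(16, 4), np.random.randn(9, 4)]
    weight = tucker_reconstruct(core, factors)
    print(weight.shape)  # (16, 16, 9); stored parameters: 228 vs. 2304 for the full tensor

In this toy setting the compression comes from storing 228 numbers (core plus factors) instead of the 2304 entries of the full tensor; in ADA-Tucker the core and the transformation matrices are the learnable quantities, and the orders and dimensions of the decomposition are adjusted per layer.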

Authors

  • Zhisheng Zhong
    Key Laboratory of Machine Perception (MOE), School of EECS, Peking University, PR China. Electronic address: zszhong@pku.edu.cn.
  • Fangyin Wei
    Key Laboratory of Machine Perception (MOE), School of EECS, Peking University, PR China. Electronic address: weifangyin@pku.edu.cn.
  • Zhouchen Lin
    School of Intelligence Science and Technology, Peking University, China.
  • Chao Zhang
    School of Information Engineering, Suqian University, Suqian, Jiangsu, China.