Application of Deep Compression Technique in Spiking Neural Network Chip.

Journal: IEEE Transactions on Biomedical Circuits and Systems
Published Date:

Abstract

In this paper, a reconfigurable and scalable spiking neural network processor, containing 192 neurons and 6144 synapses, is developed. By applying a deep compression technique to the spiking neural network chip, the number of physical synapses can be reduced to 1/16 of that needed in the original network while the accuracy is maintained. This compression technique greatly reduces both the number of SRAMs inside the chip and the power consumption of the chip. The design achieves a throughput per unit area of 1.1 GSOP/(s·mm²) at 1.2 V and an energy per SOP of 35 pJ. A 2-layer fully-connected spiking neural network is mapped to the chip, enabling handwritten digit recognition on MNIST with an accuracy of 91.2%.
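The abstract does not detail the compression procedure itself; the sketch below illustrates one common ingredient of deep compression, magnitude-based pruning of a 2-layer fully-connected network's weights down to a 1/16 density, matching the compression ratio quoted above. The layer sizes (784-192-10), the pruning criterion, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed details, not the authors' method): prune a
# 2-layer fully-connected network to 1/16 weight density by keeping only
# the largest-magnitude weights, as in magnitude-based pruning.
import numpy as np

rng = np.random.default_rng(0)


def prune_to_density(w: np.ndarray, density: float) -> np.ndarray:
    """Zero out all but the `density` fraction of largest-|w| entries."""
    k = max(1, int(round(density * w.size)))          # number of weights to keep
    threshold = np.sort(np.abs(w), axis=None)[-k]     # magnitude of k-th largest weight
    return w * (np.abs(w) >= threshold)               # mask out the rest


# Hypothetical trained weights for a 784-192-10 fully-connected SNN
w1 = rng.standard_normal((784, 192)) * 0.1
w2 = rng.standard_normal((192, 10)) * 0.1

w1_sparse = prune_to_density(w1, 1.0 / 16.0)
w2_sparse = prune_to_density(w2, 1.0 / 16.0)

print("layer 1 density:", np.count_nonzero(w1_sparse) / w1_sparse.size)
print("layer 2 density:", np.count_nonzero(w2_sparse) / w2_sparse.size)
```

In a hardware mapping, only the surviving nonzero weights would need to be stored in on-chip SRAM, which is how such a reduction in physical synapses can translate into area and power savings.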

Authors

  • Yanchen Liu
  • Kun Qian
    Key Laboratory of Brain Health Intelligent Evaluation and Intervention (Beijing Institute of Technology), Ministry of Education, Beijing, China.
  • Shaogang Hu
  • Kun An
  • Sheng Xu
    School of Physics and Information Engineering, Jiangsu Second Normal University, Nanjing, 211200, China.
  • Xitong Zhan
  • J J Wang
  • Rui Guo
College of Chemistry & Chemical Engineering, Xiamen University, Xiamen 361005, China.
  • Yuancong Wu
  • Tu-Pei Chen
  • Qi Yu
    Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, China.
  • Yang Liu
    Department of Computer Science, Hong Kong Baptist University, Hong Kong, China.