Training much deeper spiking neural networks with a small number of time-steps.

Journal: Neural networks : the official journal of the International Neural Network Society
Published Date:

Abstract

The Spiking Neural Network (SNN) is a promising energy-efficient neural architecture when implemented on neuromorphic hardware. The Artificial Neural Network (ANN) to SNN conversion method, currently the most effective SNN training method, has successfully converted moderately deep ANNs to SNNs with satisfactory performance. However, this method requires a large number of time-steps, which hurts the energy efficiency of SNNs. How to effectively convert a very deep ANN (e.g., more than 100 layers) to an SNN with a small number of time-steps remains a difficult task. To tackle this challenge, this paper makes the first attempt to propose a novel error analysis framework that takes both the "quantization error" and the "deviation error" into account, which come from the discretization of SNN dynamics (i.e., the neuron's coding scheme) and the inconstant input currents at intermediate layers, respectively. In particular, our theory reveals that the "deviation error" depends on both the spike threshold and the input variance. Based on our theoretical analysis, we further propose the Threshold Tuning and Residual Block Restructuring (TTRBR) method, which can convert very deep ANNs (>100 layers) to SNNs with negligible accuracy degradation while requiring only a small number of time-steps. With very deep networks, our TTRBR method achieves state-of-the-art (SOTA) performance on the CIFAR-10, CIFAR-100, and ImageNet classification tasks.
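To make the "quantization error" discussed in the abstract concrete, below is a minimal, illustrative sketch (not the authors' TTRBR method): a rate-coded integrate-and-fire (IF) neuron with a soft reset approximates a ReLU activation, and fewer time-steps T leave a larger discretization gap. The function name, threshold value, and parameters are hypothetical choices for illustration only.

```python
# Illustrative sketch, assuming a standard IF neuron with soft reset and a
# constant input current; names (simulate_if_neuron, v_threshold, T) are
# hypothetical and not taken from the paper.

def simulate_if_neuron(input_current: float, v_threshold: float, T: int) -> float:
    """Run an IF neuron for T time-steps and return the rate-coded output."""
    v = 0.0           # membrane potential
    spikes = 0
    for _ in range(T):
        v += input_current       # integrate the (constant) input current
        if v >= v_threshold:     # fire once the threshold is reached
            spikes += 1
            v -= v_threshold     # soft reset: subtract the threshold
    return v_threshold * spikes / T   # average firing rate, scaled by threshold


if __name__ == "__main__":
    a = 0.37  # target ReLU activation of the source ANN (arbitrary example)
    for T in (4, 16, 256):
        est = simulate_if_neuron(a, v_threshold=1.0, T=T)
        print(f"T={T:4d}  rate-coded output={est:.4f}  |error|={abs(est - a):.4f}")
```

Running this shows the gap between the rate-coded output and the ANN activation shrinking as T grows, which is the basic tension the paper addresses: small T is energy-efficient but enlarges this discretization error, and the "deviation error" (from non-constant input currents in deeper layers) compounds it.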

Authors

  • Qingyan Meng
    The Chinese University of Hong Kong, Shenzhen, China; Shenzhen Research Institute of Big Data, Shenzhen 518115, China. Electronic address: qingyanmeng@link.cuhk.edu.cn.
  • Shen Yan
    Center for Data Science, Peking University, China. Electronic address: yanshen@pku.edu.cn.
  • Mingqing Xiao
    Department of Mathematics, Southern Illinois University, IL 62901, USA. Electronic address: mxiao@siu.edu.
  • Yisen Wang
    State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou, China.
  • Zhouchen Lin
    School of Intelligence Science and Technology, Peking University, China.
  • Zhi-Quan Luo
    The Chinese University of Hong Kong, Shenzhen, China; Shenzhen Research Institute of Big Data, Shenzhen 518115, China. Electronic address: luozq@cuhk.edu.cn.