Temporal Backpropagation for Spiking Neural Networks with One Spike per Neuron.

Journal: International Journal of Neural Systems
Published Date:

Abstract

We propose a new supervised learning rule for multilayer spiking neural networks (SNNs) that use a form of temporal coding known as rank-order coding. With this coding scheme, all neurons fire exactly one spike per stimulus, but the firing order carries information. In particular, in the readout layer, the first neuron to fire determines the class of the stimulus. We derive a new learning rule for this sort of network, named S4NN, akin to traditional error backpropagation yet based on latencies. We show how approximated error gradients can be computed backward in a feedforward network with any number of layers. This approach reaches state-of-the-art performance for supervised, fully connected multilayer SNNs: a test accuracy of 97.4% on the MNIST dataset and 99.2% on the Caltech Face/Motorbike dataset. Yet the neuron model we use, the non-leaky integrate-and-fire neuron, is much simpler than those used in all previous works. The source code of the proposed S4NN is publicly available at https://github.com/SRKH/S4NN.
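The coding scheme described above (non-leaky integrate-and-fire neurons, at most one spike per neuron, and a first-to-spike readout) can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's actual implementation: the function name, the threshold value, and the simulation horizon `t_max` are all assumptions made for illustration.

```python
import numpy as np

def first_spike_times(input_times, weights, threshold=1.0, t_max=256):
    """Non-leaky integrate-and-fire layer firing at most one spike per neuron.

    Each input neuron j emits a single spike at input_times[j]. The membrane
    potential of output neuron i accumulates weights[i, j] once input j has
    fired; since the model is non-leaky, the potential never decays. Neuron i
    spikes the first time its potential crosses `threshold`; neurons that
    never cross are assigned the latest time t_max.
    (Illustrative sketch only; details are assumptions, not the paper's code.)
    """
    n_out = weights.shape[0]
    spike_times = np.full(n_out, t_max, dtype=int)
    for t in range(t_max):
        active = input_times <= t              # inputs that have fired by time t
        v = weights @ active.astype(float)     # non-leaky: potential only grows
        newly = (v >= threshold) & (spike_times == t_max)
        spike_times[newly] = t                 # record first threshold crossing
    return spike_times

# First-to-spike readout: the earliest-firing output neuron gives the class.
rng = np.random.default_rng(0)
times = rng.integers(0, 100, size=20)          # one input spike latency per input neuron
w = rng.uniform(0, 0.2, size=(10, 20))         # 10 output classes, 20 inputs
out_t = first_spike_times(times, w)
predicted_class = int(np.argmin(out_t))
```

Because information is carried entirely by spike latencies, learning in S4NN amounts to adjusting weights so that the correct output neuron's first spike comes earlier than the others.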

Authors

  • Saeed Reza Kheradpisheh
    Department of Computer Science, Faculty of Mathematical Sciences and Computer, Kharazmi University, Tehran, Iran.
  • Timothée Masquelier
    CERCO UMR 5549, CNRS-Université de Toulouse 3, F-31300, France.