Converting High-Performance and Low-Latency SNNs Through Explicit Modeling of Residual Error in ANNs.
Journal: IEEE Transactions on Neural Networks and Learning Systems
Published: May 29, 2025
Abstract
Spiking neural networks (SNNs) have garnered interest because of their energy efficiency and effectiveness on neuromorphic chips compared with traditional artificial neural networks (ANNs). One of the mainstream approaches to implementing deep SNNs is ANN-SNN conversion, which combines the efficient training strategies of ANNs with the energy-saving potential and fast inference capability of SNNs. However, under extremely low-latency conditions, existing conversion theory shows that neurons in the converted SNN may fire more or fewer spikes than expected within each layer, i.e., residual error, which creates a performance gap between the converted SNNs and the original ANNs. This severely limits the practical application of SNNs on latency-sensitive edge devices. Existing conversion methods that address this problem usually modify the state of the converted spiking neurons, but they do not consider adaptability and compatibility with neuromorphic chips. We propose a new approach that explicitly models residual errors as additive noise. The noise is incorporated into the activation function of the source ANN, effectively reducing the impact of residual error on SNN performance. Experiments on the CIFAR-10/100 and Tiny-ImageNet datasets verify that our approach outperforms prevailing ANN-SNN conversion methods and directly trained SNNs in terms of accuracy and required time steps. Overall, our method offers a new way to improve SNN performance under ultralow-latency conditions and is expected to promote the further development of practical neuromorphic hardware applications. The code for our NQ framework is available at https://github.com/hzp2022/ANN2SNN_NQ.
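The central idea described in the abstract is that the mismatch between the ideal ANN activation and the spike count an SNN can produce in a few time steps (the residual error) can be simulated during ANN training by injecting noise into a quantized activation. The following is a minimal, hypothetical PyTorch sketch of that idea; the class name NoisyQuantActivation, the uniform noise model, and all default values are assumptions for illustration and do not reproduce the authors' NQ implementation.

import torch
import torch.nn as nn

class NoisyQuantActivation(nn.Module):
    """Quantized ANN activation with additive noise that stands in for SNN
    residual error during source-ANN training (illustrative sketch only)."""

    def __init__(self, num_steps=4, noise_scale=0.5):
        super().__init__()
        self.num_steps = num_steps      # T: intended number of SNN inference time steps
        self.noise_scale = noise_scale  # amplitude of the simulated residual-error noise
        self.threshold = nn.Parameter(torch.tensor(1.0))  # trainable firing threshold

    def forward(self, x):
        # Clip and quantize the activation to T discrete levels, mimicking the
        # spike count a neuron can emit over T time steps.
        x = torch.clamp(x / self.threshold, 0.0, 1.0)
        q = torch.floor(x * self.num_steps + 0.5) / self.num_steps
        # Straight-through estimator: quantized value forward, smooth gradient backward.
        x = x + (q - x).detach()
        if self.training:
            # Additive noise of up to one quantization level models a neuron firing
            # one spike more or fewer than the ideal count (the residual error).
            noise = (torch.rand_like(x) - 0.5) * (2.0 / self.num_steps) * self.noise_scale
            x = x + noise
        return x * self.threshold

Training the source ANN with such a noisy activation encourages weights that stay accurate when each neuron's output deviates by roughly one spike, which is the failure mode the paper targets at very low latency.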