Unsupervised post-training learning in spiking neural networks.
Journal:
Scientific Reports
Published Date:
May 21, 2025
Abstract
The human brain is a dynamic system that is constantly learning. It employs multiple learning strategies in combination to support complex learning processes. However, implementing biological learning mechanisms in Spiking Neural Networks (SNNs) remains challenging; as a result, most SNNs are trained with only a single learning rule, such as spike-timing-dependent plasticity (STDP). Moreover, conventional neural networks are first trained on one dataset and subsequently evaluated on unseen data. In this traditional approach, the weights and structure of the model remain fixed once the training step concludes. In this research, we aim to modify this traditional approach and hypothesize that adding short-term plasticity (STP) to a trained SNN enables the model to keep learning after training without changing its synaptic weights. In particular, by combining triplet STDP for long-term learning during initial training with STP for short-term learning after training (post-training), we employ multiple learning rules to enhance both the biological plausibility and the computational abilities of SNNs. As a proof of concept, we design two unsupervised learning pipelines for image classification in which a dynamic synapse model, driven by neurotransmitter release and synaptic strength, is integrated into the trained network. The proposed method outperforms traditional training, achieving higher classification accuracy and a faster convergence rate. Our results thus show that post-training learning can be realized by incorporating STP into SNNs. Future studies should extend this concept to other challenges and explore its applicability to new datasets.
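The post-training mechanism described in the abstract rests on a dynamic synapse whose effective strength fluctuates with recent presynaptic activity while the learned weight itself stays frozen. The sketch below illustrates this idea with a Tsodyks-Markram-style model of neurotransmitter release and recovery; the class name, parameter names (U, tau_rec, tau_fac), and the specific update equations are illustrative assumptions, not the authors' exact formulation.

    import numpy as np

    # Illustrative Tsodyks-Markram-style short-term plasticity (STP) synapse.
    # The long-term weight w is frozen after training; only the transient
    # variables u (utilisation / release probability) and x (available
    # neurotransmitter resources) evolve, so the *effective* weight w * u * x
    # changes with recent spike history. All parameter values are assumptions
    # chosen for illustration only.

    class STPSynapse:
        def __init__(self, w, U=0.2, tau_rec=200.0, tau_fac=600.0):
            self.w = w              # fixed long-term weight (e.g. learned with triplet STDP)
            self.U = U              # baseline utilisation of synaptic resources
            self.tau_rec = tau_rec  # recovery time constant of resources (ms)
            self.tau_fac = tau_fac  # decay time constant of facilitation (ms)
            self.u = U              # current utilisation (facilitation variable)
            self.x = 1.0            # fraction of available neurotransmitter

        def step(self, dt, pre_spike):
            # Continuous relaxation between spikes.
            self.x += dt * (1.0 - self.x) / self.tau_rec
            self.u += dt * (self.U - self.u) / self.tau_fac
            psc = 0.0
            if pre_spike:
                # Facilitation increases utilisation; a fraction u of the
                # remaining resources x is then released and contributes
                # to the postsynaptic current.
                self.u += self.U * (1.0 - self.u)
                psc = self.w * self.u * self.x
                self.x -= self.u * self.x
            return psc  # effective postsynaptic contribution at this time step


    if __name__ == "__main__":
        syn = STPSynapse(w=1.0)
        spikes = [t % 20 == 0 for t in range(200)]   # regular 50 Hz presynaptic train, dt = 1 ms
        pscs = [syn.step(dt=1.0, pre_spike=s) for s in spikes]
        print([round(p, 3) for p in pscs if p > 0])  # response varies although w never changes

In this sketch the weight w never changes; only the short-term variables modulate the effective synaptic efficacy, which is the sense in which the trained network can adapt post-training without weight updates.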