Evolutionary learning in neural networks by heterosynaptic plasticity.

Journal: iScience

Abstract

Training biophysical neuron models provides insights into the organization and problem-solving capabilities of brain circuits. Traditional training methods such as backpropagation struggle with complex models because of instability and gradient issues. We explore evolutionary algorithms (EAs) combined with heterosynaptic plasticity as a gradient-free alternative. Our EA models agents as distinct neuronal information routes, evaluated via alternating gating and guided by dopamine-driven plasticity. This model draws inspiration from several biological mechanisms, including dopamine function, dendritic spine meta-plasticity, memory replay, and cooperative synaptic plasticity within dendritic neighborhoods. Neural networks trained with this model recapitulate brain-like dynamics during cognition. Our method effectively trains spiking and analog neural networks in both feedforward and recurrent architectures, and it achieves performance comparable to gradient-based methods on tasks such as MNIST classification and Atari games. Overall, this research extends training approaches for biophysical neuron models, offering a robust alternative to traditional algorithms.
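
The abstract describes the approach only at a high level. As a point of reference, the sketch below is a generic gradient-free evolutionary training loop in Python, not the authors' algorithm: it assumes a small analog feedforward network, a population of weight-perturbation "agents", and a scalar reward standing in for a dopamine-like signal. The actual method, per the abstract, evaluates distinct information routes via alternating gating and applies heterosynaptic, dopamine-driven plasticity rather than the explicit perturbation averaging shown here. All names and hyperparameters are hypothetical.

    # Illustrative sketch only: a generic gradient-free evolutionary loop.
    # Not the authors' algorithm; names and hyperparameters are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)

    def forward(weights, x):
        """Tiny analog feedforward network: one hidden tanh layer."""
        w1, w2 = weights
        return np.tanh(x @ w1) @ w2

    def reward(weights, x, y):
        """Scalar reward (negative squared error), standing in for a dopamine-like signal."""
        return -np.mean((forward(weights, x) - y) ** 2)

    # Toy regression task.
    x = rng.normal(size=(64, 8))
    y = np.sin(x.sum(axis=1, keepdims=True))

    # Base parameters shared by all agents.
    w1 = rng.normal(scale=0.1, size=(8, 16))
    w2 = rng.normal(scale=0.1, size=(16, 1))

    pop_size, sigma, lr = 32, 0.05, 0.1
    for generation in range(200):
        noises, rewards = [], []
        for _ in range(pop_size):
            # Each "agent" is a perturbed copy of the base parameters.
            eps = [rng.normal(scale=sigma, size=w.shape) for w in (w1, w2)]
            noises.append(eps)
            rewards.append(reward((w1 + eps[0], w2 + eps[1]), x, y))
        # Reward-weighted update: perturbations that earned higher reward
        # pull the base parameters toward themselves.
        r = np.array(rewards)
        r = (r - r.mean()) / (r.std() + 1e-8)
        w1 += lr / (pop_size * sigma) * sum(ri * e[0] for ri, e in zip(r, noises))
        w2 += lr / (pop_size * sigma) * sum(ri * e[1] for ri, e in zip(r, noises))

    print("final reward:", reward((w1, w2), x, y))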

Authors

  • Zedong Bi
    Lingang Laboratory, Shanghai 200031, China.
  • Ruiqi Fu
  • Guozhang Chen
    School of Physics, University of Sydney, Sydney, NSW 2006, Australia.
  • Dongping Yang
    Research Institute of Artificial Intelligence, Zhejiang Lab, Hangzhou 311121, China.
  • Yu Zhou
    Department of Biospectroscopy, Leibniz-Institut für Analytische Wissenschaften - ISAS - e.V., Dortmund, Germany.
  • Liang Tian
