Random synaptic feedback weights support error backpropagation for deep learning.

Journal: Nature Communications

Abstract

The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning.
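Below is a minimal sketch (not the authors' code) of the random-feedback idea described in the abstract: the hidden-layer error signal is computed with a fixed random matrix B instead of the transpose of the forward weights, which is what standard backpropagation would require. The toy task (learning a random linear mapping), layer sizes, learning rate, and variable names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 30, 20, 10

# Forward weights (learned) and a fixed random feedback matrix (never updated).
W1 = rng.normal(0, 0.1, (n_hidden, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hidden))
B  = rng.normal(0, 0.1, (n_hidden, n_out))   # replaces W2.T in the backward pass

T = rng.normal(0, 1.0, (n_out, n_in))        # target linear mapping to learn
lr = 0.01

for step in range(5001):
    x = rng.normal(0, 1.0, (n_in, 1))
    y_target = T @ x

    # Forward pass through one hidden layer with a tanh nonlinearity.
    h = np.tanh(W1 @ x)
    y = W2 @ h

    # Output error.
    e = y - y_target

    # Random feedback: route the output error to the hidden layer through the
    # fixed random matrix B rather than through W2.T (as backprop would).
    delta_h = (B @ e) * (1.0 - h ** 2)

    # Gradient-style weight updates.
    W2 -= lr * e @ h.T
    W1 -= lr * delta_h @ x.T

    if step % 1000 == 0:
        print(f"step {step:4d}  squared error {float(e.T @ e):.4f}")
```

Running this sketch, the squared error should fall steadily even though the backward pathway is random and fixed, which is the qualitative behaviour the abstract describes; it is meant only to make the mechanism concrete, not to reproduce the paper's experiments.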

Authors

  • Timothy P Lillicrap
    Department of Pharmacology, University of Oxford, Oxford OX1 3QT, UK.
  • Daniel Cownden
School of Biology, University of St Andrews, Harold Mitchell Building, St Andrews, Fife KY16 9TH, UK.
  • Douglas B Tweed
    Departments of Physiology and Medicine, University of Toronto, Toronto, Ontario M5S 1A8, Canada.
  • Colin J Akerman
    Department of Pharmacology, University of Oxford, Oxford OX1 3QT, UK.