A sparse quantized Hopfield network for online-continual memory.

Journal: Nature Communications

Abstract

An important difference between brains and deep neural networks is the way they learn. Nervous systems learn online, where a stream of noisy data points is presented in a non-independent and identically distributed (non-i.i.d.) way. Further, synaptic plasticity in the brain depends only on information local to synapses. Deep networks, on the other hand, typically use non-local learning algorithms and are trained in an offline, noise-free, i.i.d. setting. Understanding how neural networks learn under the same constraints as the brain is an open problem for neuroscience and neuromorphic computing, and a standard approach to it has yet to be established. In this paper, we propose that discrete graphical models that learn via an online maximum a posteriori (MAP) learning algorithm could provide such an approach. We implement this kind of model in a neural network called the Sparse Quantized Hopfield Network (SQHN). We show that our model outperforms state-of-the-art neural networks on associative memory tasks, outperforms these networks in online, continual settings, learns efficiently with noisy inputs, and is better than baselines on an episodic memory task.
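The abstract's core recipe, a discrete (quantized) latent code, MAP inference, and a purely local online weight update, can be illustrated with a toy associative memory. The sketch below is a hypothetical simplification, not the paper's SQHN architecture: it assumes a single hidden layer with a one-hot latent code, cosine-similarity MAP inference, a count-based running-average update applied only to the winning neuron, and a novelty threshold (theta, my own choice) for recruiting unused neurons.

    import numpy as np

    class OnlineQuantizedMemory:
        """Toy associative memory with a one-hot (quantized) latent code,
        MAP inference, and local online learning. A hypothetical sketch,
        not the paper's SQHN architecture."""

        def __init__(self, n_hidden, n_visible, seed=0):
            rng = np.random.default_rng(seed)
            self.W = rng.normal(scale=0.01, size=(n_hidden, n_visible))
            self.counts = np.zeros(n_hidden)  # how often each neuron has won

        def infer(self, x):
            # MAP inference over the discrete latent: the hidden neuron whose
            # weight vector is most similar (cosine) to the input wins.
            norms = np.linalg.norm(self.W, axis=1) + 1e-12
            return int(np.argmax((self.W @ x) / norms))

        def learn(self, x, theta=0.5):
            # Online, local update: only the winning neuron's weights change.
            k = self.infer(x)
            cos = (self.W[k] @ x) / (np.linalg.norm(self.W[k])
                                     * np.linalg.norm(x) + 1e-12)
            if cos < theta and np.any(self.counts == 0):
                # Novelty: recruit an unused neuron rather than overwrite
                # an old memory (a simple guard against forgetting).
                k = int(np.argmax(self.counts == 0))
            self.counts[k] += 1
            # Count-based running average: W[k] becomes the mean of the
            # inputs this neuron has claimed, using only quantities local
            # to that neuron (its input, weights, and usage count).
            self.W[k] += (x - self.W[k]) / self.counts[k]
            return k

        def recall(self, x):
            # Associative recall: reconstruct the stored pattern for the
            # latent state inferred from a (possibly noisy) cue.
            return self.W[self.infer(x)]

    # Usage: store a short, repeated (non-i.i.d.) stream, then cue with noise.
    rng = np.random.default_rng(1)
    mem = OnlineQuantizedMemory(n_hidden=16, n_visible=32)
    patterns = rng.normal(size=(4, 32))
    for _ in range(3):                # correlated, sequential presentations
        for p in patterns:
            mem.learn(p)
    noisy = patterns[0] + 0.1 * rng.normal(size=32)
    print(np.allclose(mem.recall(noisy), patterns[0], atol=1e-6))  # True

Because each update touches only the winning neuron's row and depends only on that neuron's own input, weights, and count, learning is online and synapse-local in the sense the abstract describes; the one-hot winner is the sparse, quantized latent state.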

Authors

  • Nicholas Alonso
    Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA. nalonso2@uci.edu.
  • Jeffrey L Krichmar
    Department of Computer Science, University of California, Irvine, Irvine, CA, USA; Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA.