Schema formation in a neural population subspace underlies learning-to-learn in flexible sensorimotor problem-solving.

Journal: Nature Neuroscience
PMID:

Abstract

Learning-to-learn, the progressive speedup of learning across a series of similar problems, is a core process of knowledge acquisition that has drawn attention in both neuroscience and artificial intelligence. To investigate its underlying brain mechanism, we trained a recurrent neural network model on arbitrary sensorimotor mappings known to depend on the prefrontal cortex. The network displayed an exponential time course of accelerated learning. The neural substrate of a schema emerged within a low-dimensional subspace of population activity; its reuse in new problems facilitated learning by limiting connection weight changes. Our work highlights weight-driven modifications of the vector field, which determines the population trajectory of a recurrent network and thereby its behavior. Such plasticity is especially important for preserving and reusing the learned schema despite undesirable vector-field changes induced by the transition to a new problem; the accumulated changes across problems account for the learning-to-learn dynamics.
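The "exponential time course of accelerated learning" described above can be quantified by fitting an exponential decay to the number of trials needed to master each successive problem. Below is a minimal sketch of such a fit using log-linear least squares; the trial counts are invented illustrative data, not results from the paper, and `fit_exponential_speedup` is a hypothetical helper name.

```python
# Hedged sketch: fitting an exponential learning-to-learn curve,
# trials ~ a * exp(-k * problem_index), via log-linear regression.
# All data below are invented for illustration.
import numpy as np

def fit_exponential_speedup(trials_to_criterion):
    """Fit trials ~ a * exp(-k * i) by least squares on log(trials).

    Returns (a, k): the initial learning cost and the speedup rate.
    """
    y = np.log(np.asarray(trials_to_criterion, dtype=float))
    x = np.arange(len(y))
    # Line through (x, log y): slope = -k, intercept = log a.
    slope, intercept = np.polyfit(x, y, 1)
    return float(np.exp(intercept)), float(-slope)

# Invented data: trials to criterion on six successive problems.
trials = [800, 430, 240, 130, 75, 42]
a, k = fit_exponential_speedup(trials)  # k > 0 indicates speedup
```

A positive fitted rate `k` indicates that each new problem is learned faster than the last, which is the behavioral signature of learning-to-learn; the fit degrades once performance saturates near a floor.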

Authors

  • Vishwa Goudar
    Center for Neural Science, New York University, New York, NY, USA.
  • Barbara Peysakhovich
    Department of Neurobiology, University of Chicago, Chicago, IL, USA.
  • David J Freedman
Department of Neurobiology, University of Chicago, Chicago, IL, USA; Grossman Institute for Neuroscience, Quantitative Biology, and Human Behavior, University of Chicago, Chicago, IL, USA.
  • Elizabeth A Buffalo
    Department of Physiology and Biophysics, University of Washington School of Medicine, Seattle, WA, USA.
  • Xiao-Jing Wang
Center for Neural Science, New York University, New York, NY, USA. xjwang@nyu.edu