Inductive biases of neural network modularity in spatial navigation.

Journal: Science Advances
PMID:

Abstract

The brain may have evolved a modular architecture for daily tasks, with circuits featuring functionally specialized modules that match the task structure. We hypothesize that this architecture enables better learning and generalization than architectures with less specialized modules. To test this, we trained reinforcement learning agents with various neural architectures on a naturalistic navigation task. We found that the modular agent, with an architecture that segregates computations of state representation, value, and action into specialized modules, achieved better learning and generalization. Its learned state representation combines prediction and observation, weighted by their relative uncertainty, akin to recursive Bayesian estimation. The modular agent's behavior also more closely resembles that of macaques. Our results shed light on the possible rationale for the brain's modularity and suggest that artificial systems can use this insight from neuroscience to improve learning and generalization in natural tasks.
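The uncertainty-weighted combination of prediction and observation described in the abstract can be sketched as a one-dimensional Kalman-filter update, a simple instance of recursive Bayesian estimation. This is an illustrative sketch, not the paper's implementation; the function name and values are hypothetical.

```python
def fuse(pred_mean, pred_var, obs_mean, obs_var):
    """Combine a prediction and an observation, each with its own variance.

    Illustrative 1-D Kalman update: the estimate is weighted toward
    whichever source has the smaller uncertainty (variance).
    """
    # Kalman gain: fraction of trust placed in the observation.
    gain = pred_var / (pred_var + obs_var)
    mean = pred_mean + gain * (obs_mean - pred_mean)
    # The fused estimate is at least as certain as the prediction alone.
    var = (1.0 - gain) * pred_var
    return mean, var

# A noisy observation (variance 4) shifts a confident prediction
# (variance 1) only slightly: gain = 0.2, so mean = 1.0, var = 0.8.
mean, var = fuse(0.0, 1.0, 5.0, 4.0)
```

When the two sources are equally uncertain, the update simply averages them; as observation noise grows, the estimate falls back on the prediction.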

Authors

  • Ruiyi Zhang
    Tandon School of Engineering, New York University, New York, NY, USA.
  • Xaq Pitkow
    Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA; Center for Neuroscience and Artificial Intelligence, BCM, Houston, TX, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA.
  • Dora E Angelaki
    Tandon School of Engineering, New York University, New York, NY, USA.