The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks.

Journal: Neuron

Abstract

Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
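To make the abstract's central idea concrete, the sketch below shows how continuously valued, low-dimensional factors can be recovered from discrete, variable spiking. This is a generic illustration, not the authors' training method: the latent signals, loading matrix, Poisson spiking, and the PCA-based factor extraction are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 100 neurons over 500 time bins, driven by 3 shared
# latent signals standing in for population-level factors.
n_neurons, n_bins, n_factors = 100, 500, 3

t = np.linspace(0, 2 * np.pi, n_bins)
latents = np.stack([np.sin(t), np.cos(2 * t), np.sin(3 * t)])  # (3, T)
loadings = rng.normal(size=(n_neurons, n_factors))             # (N, 3)
rates = np.exp(0.5 * loadings @ latents)                       # positive firing rates

# Spikes are discrete and variable (Poisson), yet the shared low-dimensional
# structure survives because it is common across the population.
spikes = rng.poisson(rates)                                    # (N, T) spike counts

# Smooth spike counts in time, then extract factors with PCA (via SVD),
# a generic stand-in for the factor-extraction step described above.
kernel = np.ones(25) / 25.0
smoothed = np.apply_along_axis(
    lambda x: np.convolve(x, kernel, mode="same"), 1, spikes
)
centered = smoothed - smoothed.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
factors = Vt[:n_factors]                                       # (3, T) continuous factors

var_explained = (S[:n_factors] ** 2).sum() / (S ** 2).sum()
print(f"variance explained by {n_factors} factors: {var_explained:.2f}")
```

Despite each neuron spiking stochastically, the leading principal components of the smoothed population activity are smooth, reliable time courses, which is the relationship between spikes and factors the paper builds on.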

Authors

  • Brian DePasquale
    Department of Neuroscience, Columbia University College of Physicians and Surgeons, New York, New York, USA.
  • David Sussillo
    Department of Electrical Engineering and Neurosciences Program, Stanford University, Stanford, California, USA.
  • L. F. Abbott
    Department of Neuroscience, Columbia University College of Physicians and Surgeons, New York, New York, USA.
  • Mark M Churchland
    Department of Neuroscience, Grossman Center for the Statistics of Mind, David Mahoney Center for Brain and Behavior Research, Kavli Institute for Brain Science, Columbia University Medical Center, New York, New York, USA.