Efficient neural codes naturally emerge through gradient descent learning.

Journal: Nature Communications

Abstract

Human sensory systems are more sensitive to common features in the environment than to uncommon features. For example, small deviations from the more frequently encountered horizontal orientations can be more easily detected than small deviations from the less frequent diagonal ones. Here we find that artificial neural networks trained to recognize objects also have patterns of sensitivity that match the statistics of features in images. To interpret these findings, we show mathematically that learning with gradient descent in neural networks preferentially creates representations that are more sensitive to common features, a hallmark of efficient coding. This effect occurs even in systems with otherwise unconstrained coding resources, and arises under both supervised and unsupervised learning objectives. This result demonstrates that efficient codes can naturally emerge from gradient-like learning.
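The abstract's central claim, that gradient descent preferentially allocates sensitivity to common features, can be illustrated with a minimal toy sketch. The data distribution, network sizes, and hyperparameters below are hypothetical choices for illustration, not the paper's actual setup: a two-layer linear autoencoder trained by plain gradient descent on anisotropic Gaussian data ends up with a code that responds more strongly to perturbations along the high-variance ("common") direction than along the low-variance ("uncommon") one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy anisotropic data: the first feature direction is "common" (high variance),
# the second is "uncommon" (low variance). Illustrative only, not image statistics.
n = 2000
X = rng.normal(size=(n, 2)) * np.array([2.0, 0.5])

# Two-layer linear autoencoder with a 1-unit code: h = W1 x, xhat = W2 h.
# Small random initialization, trained with vanilla gradient descent.
W1 = rng.normal(scale=0.01, size=(1, 2))
W2 = rng.normal(scale=0.01, size=(2, 1))

lr = 0.05
for _ in range(500):
    H = X @ W1.T               # (n, 1) hidden code
    Xhat = H @ W2.T            # (n, 2) reconstruction
    E = Xhat - X               # reconstruction error
    # Gradients of the mean squared error 0.5 * ||Xhat - X||^2
    gW2 = E.T @ H / n
    gW1 = (E @ W2).T @ X / n
    W1 -= lr * gW1
    W2 -= lr * gW2

# Sensitivity of the learned code to a unit perturbation along each direction.
sens_common = np.linalg.norm(W1 @ np.array([1.0, 0.0]))
sens_rare = np.linalg.norm(W1 @ np.array([0.0, 1.0]))
print(sens_common, sens_rare)
```

In this sketch the learned code aligns with the high-variance direction, so `sens_common` comes out much larger than `sens_rare`: the representation is more sensitive to the common feature, mirroring the effect the abstract describes for much larger networks.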

Authors

  • Ari S Benjamin
    Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA. Electronic address: aarrii@seas.upenn.edu.
  • Ling-Qi Zhang
    Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA.
  • Cheng Qiu
    Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA.
  • Alan A Stocker
    Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA.
  • Konrad P Kording
    Departments of Bioengineering and Neuroscience, University of Pennsylvania, Philadelphia, PA 19104, USA. kording@upenn.edu, www.kordinglab.com.