Many but not all deep neural network audio models capture brain responses and exhibit correspondence between model stages and brain regions.

Journal: PLOS Biology
PMID:

Abstract

Models that predict brain responses to stimuli provide one measure of understanding of a sensory system and have many potential applications in science and engineering. Deep artificial neural networks have emerged as the leading such predictive models of the visual system but are less explored in audition. Prior work provided examples of audio-trained neural networks that produced good predictions of auditory cortical fMRI responses and exhibited correspondence between model stages and brain regions, but left it unclear whether these results generalize to other neural network models and, thus, how to further improve models in this domain. We evaluated model-brain correspondence for publicly available audio neural network models along with in-house models trained on 4 different tasks. Most tested models outpredicted standard spectrotemporal filter-bank models of auditory cortex and exhibited systematic model-brain correspondence: Middle stages best predicted primary auditory cortex, while deep stages best predicted non-primary cortex. However, some state-of-the-art models produced substantially worse brain predictions. Models trained to recognize speech in background noise produced better brain predictions than models trained to recognize speech in quiet, potentially because hearing in noise imposes constraints on biological auditory representations. The training task influenced the prediction quality for specific cortical tuning properties, with best overall predictions resulting from models trained on multiple tasks. The results generally support the promise of deep neural networks as models of audition, though they also indicate that current models do not explain auditory cortical responses in their entirety.
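
The abstract describes the evaluation at a high level; in this literature, "predicting brain responses" typically means fitting a cross-validated regularized regression from a model stage's activations to each fMRI voxel's response and scoring the held-out predictions. The sketch below illustrates that idea with random placeholder data and scikit-learn's ridge regression; the array shapes, variable names, and regularization grid are illustrative assumptions, not the paper's exact pipeline.

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

# Hypothetical sizes: 165 stimuli, one model stage with 512 units, 100 voxels.
rng = np.random.default_rng(0)
n_stimuli, n_units, n_voxels = 165, 512, 100
activations = rng.standard_normal((n_stimuli, n_units))       # stand-in for one stage's features
voxel_responses = rng.standard_normal((n_stimuli, n_voxels))  # stand-in for measured fMRI data

def stage_prediction_score(X, Y, n_splits=10):
    """Median cross-validated Pearson r across voxels for one model stage."""
    preds = np.zeros_like(Y)
    for train, test in KFold(n_splits=n_splits).split(X):
        # Ridge regularization strength chosen by internal cross-validation.
        reg = RidgeCV(alphas=np.logspace(-3, 5, 9)).fit(X[train], Y[train])
        preds[test] = reg.predict(X[test])
    # Pearson r between predicted and measured response, computed per voxel.
    r_per_voxel = [np.corrcoef(preds[:, v], Y[:, v])[0, 1] for v in range(Y.shape[1])]
    return float(np.median(r_per_voxel))

print(f"median voxel r for this stage: {stage_prediction_score(activations, voxel_responses):.3f}")

Repeating this procedure for each model stage, and comparing which stage best predicts voxels in each anatomical region, yields the kind of stage-region correspondence the abstract reports (middle stages for primary auditory cortex, deep stages for non-primary cortex).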

Authors

  • Greta Tuckute
    Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA.
  • Jenelle Feather
    Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA.
  • Dana Boebinger
    Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA.
  • Josh H McDermott
    Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; Center for Brains, Minds, and Machines, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, MA, USA. Electronic address: jhm@mit.edu.