Modality-independent representations of small quantities based on brain activation patterns.

Journal: Human Brain Mapping
Published Date:

Abstract

Machine learning studies using multivoxel pattern analysis (MVPA) have shown that the neural representation of quantities of objects can be decoded from fMRI patterns when the quantities are visually displayed. Here we apply these techniques to investigate whether the neural representation of a quantity depicted in one modality (say, visual) can be decoded from the brain activation patterns evoked by the same quantity depicted in the other modality (say, auditory). The main finding demonstrated, for the first time, that quantities of dots could be decoded by a classifier trained on the neural patterns evoked by quantities of auditory tones, and vice versa. The representations that were common across modalities were mainly right-lateralized in frontal and parietal regions. A second finding was that the neural patterns in parietal cortex that represent quantities were common across participants. These findings demonstrate a common neuronal foundation for the representation of quantities across sensory modalities and participants and provide insight into the role of parietal cortex in the representation of quantity information.
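The cross-modal decoding logic described above can be illustrated with a minimal sketch. This is a hypothetical toy, not the authors' pipeline: the "voxel patterns" are synthetic vectors carrying a shared quantity signal plus a modality-specific offset and noise, and the classifier is a simple nearest-centroid decoder rather than whatever classifier the study actually used. The point is only the train/test split: fit on "auditory" patterns, evaluate on "visual" ones.

```python
# Hypothetical sketch of cross-modal decoding (nearest-centroid classifier).
# All data here are synthetic stand-ins for fMRI voxel patterns; the quantity
# signal is shared across modalities, the offset is modality-specific.
import random

random.seed(0)

QUANTITIES = [1, 2, 3]   # small quantities of dots / tones
N_VOXELS = 20            # assumed pattern dimensionality (illustrative)
N_TRIALS = 10            # trials per quantity per modality

def make_pattern(quantity, modality_shift):
    # Shared quantity-dependent signal + modality offset + Gaussian noise.
    return [quantity + modality_shift + random.gauss(0, 0.5)
            for _ in range(N_VOXELS)]

def make_dataset(modality_shift):
    X, y = [], []
    for q in QUANTITIES:
        for _ in range(N_TRIALS):
            X.append(make_pattern(q, modality_shift))
            y.append(q)
    return X, y

def train_centroids(X, y):
    # Mean pattern per quantity, computed from the training modality only.
    centroids = {}
    for q in QUANTITIES:
        rows = [x for x, label in zip(X, y) if label == q]
        centroids[q] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def predict(centroids, x):
    # Assign the quantity whose centroid is closest in squared distance.
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda q: sq_dist(centroids[q]))

# Train on "auditory tone" patterns, test on "visual dot" patterns.
X_aud, y_aud = make_dataset(modality_shift=0.0)
X_vis, y_vis = make_dataset(modality_shift=0.3)
centroids = train_centroids(X_aud, y_aud)
accuracy = sum(predict(centroids, x) == t
               for x, t in zip(X_vis, y_vis)) / len(y_vis)
print(f"cross-modal decoding accuracy: {accuracy:.2f} (chance = 0.33)")
```

Above-chance accuracy on the held-out modality is what indicates a modality-independent component in the patterns; decoding at chance would suggest the quantity code is modality-specific.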

Authors

  • Saudamini Roy Damarla
    Department of Psychology, Center for Cognitive Brain Imaging, Carnegie Mellon University, Pittsburgh, Pennsylvania.
  • Vladimir L Cherkassky
    Department of Psychology, Center for Cognitive Brain Imaging, Carnegie Mellon University, Pittsburgh, Pennsylvania.
  • Marcel Adam Just
    Center for Cognitive Brain Imaging, Psychology Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA.