Attention modeled as information in learning multisensory integration.

Journal: Neural Networks: The Official Journal of the International Neural Network Society

Abstract

Top-down cognitive processes affect the way bottom-up cross-sensory stimuli are integrated. In this paper, we therefore extend a successful previous neural network model of learning multisensory integration in the superior colliculus (SC) with top-down, attentional input and train it on different classes of cross-modal stimuli. The network not only learns to integrate cross-modal stimuli, but also reproduces neurons specializing in different combinations of modalities, as well as behavioral and neurophysiological phenomena associated with spatial and feature-based attention. Importantly, we do not provide the model with any information about which input neurons are sensory and which are attentional. If the basic mechanisms of our model (self-organized learning of input statistics and divisive normalization) play a major role in the ontogenesis of the SC, then this work shows that these mechanisms suffice to explain a wide range of aspects of both bottom-up multisensory integration and the top-down influence on it.
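The abstract names two core mechanisms: self-organized learning of input statistics and divisive normalization. As a rough illustration of the latter only (not the authors' implementation; the array names, population size, and the sigma semi-saturation constant below are assumptions for the sketch), each model neuron's net input, combining a bottom-up sensory drive with a top-down attentional drive, is divided by the pooled activity of the whole population:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical net input to a population of SC model neurons, driven jointly
# by bottom-up sensory input and top-down attentional input (values assumed).
n_neurons = 100
sensory_drive = rng.random(n_neurons)           # assumed bottom-up component
attention_drive = 0.3 * rng.random(n_neurons)   # assumed top-down component
net_input = sensory_drive + attention_drive

def divisive_normalization(activity, sigma=0.1):
    # Divide each unit's activity by the summed population activity.
    # sigma is a semi-saturation constant that prevents division by zero;
    # its value here is an arbitrary placeholder.
    return activity / (sigma + activity.sum())

normalized = divisive_normalization(net_input)
print(normalized.sum())  # pooled response is bounded, close to 1

In such a scheme the normalized response of any one unit depends on the activity of all others, which is one way a purely bottom-up circuit can express the gain changes associated with attention without being told which inputs are attentional.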

Authors

  • Johannes Bauer
    University of Hamburg, Department of Informatics, Knowledge Technology, WTM, Vogt-Kölln-Straße 30, 22527 Hamburg, Germany. Electronic address: bauer@informatik.uni-hamburg.de.
  • Sven Magg
    University of Hamburg, Department of Informatics, Knowledge Technology, WTM, Vogt-Kölln-Straße 30, 22527 Hamburg, Germany. Electronic address: magg@informatik.uni-hamburg.de.
  • Stefan Wermter
    University of Hamburg, Department of Informatics, Knowledge Technology, WTM, Vogt-Kölln-Straße 30, 22527 Hamburg, Germany. Electronic address: wermter@informatik.uni-hamburg.de.