Responses of midbrain auditory neurons to two different environmental sounds: a new approach to cross-sound modeling.

Journal: Bio Systems
PMID:

Abstract

When modeling auditory responses to environmental sounds, results are satisfactory if both training and testing are restricted to datasets of a single sound type. When predicting 'cross-sound' responses (i.e., predicting the response to one type of sound, e.g., a rat Eating sound, after training with another, e.g., a rat Drinking sound), performance is typically poor. Here we implemented a novel approach to improve such cross-sound modeling (single-unit datasets were collected in the auditory midbrain of anesthetized rats). The method had two key features: (a) population responses (e.g., the average of 32 units) were analyzed instead of the responses of individual units; and (b) the long sound segment was first divided into short segments (single sound-bouts), the similarity of these bouts was then computed over a new metric involving the response (called the Stimulus Response Model map, or SRM map), and finally similar sound-bouts (regardless of sound type) and their associated responses (peri-stimulus time histograms, PSTHs) were modeled. Specifically, a committee machine model (artificial neural networks with 20 stratified spectral inputs) was trained with datasets from one sound type before predicting PSTH responses to the other sound type. Model performance improved markedly, up to 92%. The results also suggested the involvement of different neural mechanisms in generating the early and late responses to amplitude transients in broad-band environmental sounds. We conclude that rather satisfactory cross-sound modeling is possible on datasets grouped together by their similarity under the new SRM-map metric.
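The abstract describes a pipeline: split each long sound into short bouts, extract 20 stratified spectral inputs per bout, gate bouts by similarity under a response-based metric, and predict PSTHs with a committee of networks trained on one sound type. The sketch below is not the authors' code; the similarity gate, the linear "experts" standing in for the neural networks, the bout length, and all signals are invented stand-ins, shown only to make the general shape of such a cross-sound workflow concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_features(bout, n_bands=20):
    """Crude 20-band spectral profile of one sound bout (a stand-in for
    the paper's 20 stratified spectral inputs)."""
    spectrum = np.abs(np.fft.rfft(bout))
    bands = np.array_split(spectrum, n_bands)
    return np.array([b.mean() for b in bands])

def split_into_bouts(sound, bout_len):
    """Divide a long recording into equal-length short segments."""
    n = len(sound) // bout_len
    return [sound[i * bout_len:(i + 1) * bout_len] for i in range(n)]

# Toy "Drinking" (training) and "Eating" (cross-sound test) recordings.
drinking = rng.normal(size=16000)
eating = rng.normal(size=16000)
train_bouts = split_into_bouts(drinking, 400)
test_bouts = split_into_bouts(eating, 400)

X_train = np.array([spectral_features(b) for b in train_bouts])
X_test = np.array([spectral_features(b) for b in test_bouts])

# Hypothetical population-PSTH target: a fixed linear mix of the bands
# plus noise (purely synthetic; real PSTHs come from recorded units).
true_w = rng.normal(size=20)
y_train = X_train @ true_w + 0.1 * rng.normal(size=len(X_train))
y_test = X_test @ true_w

# Similarity gating: keep only test bouts whose spectral profile lies
# near the training bouts (a crude stand-in for the SRM-map metric,
# which in the paper also involves the response).
centroid = X_train.mean(axis=0)
dist = np.linalg.norm(X_test - centroid, axis=1)
keep = dist < np.median(dist)

# Committee of experts: each member is fit on a bootstrap resample and
# the committee output is the average prediction. Linear least-squares
# replaces the paper's artificial neural networks for brevity.
def fit_expert(X, y):
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

experts = []
for _ in range(5):
    idx = rng.integers(0, len(X_train), len(X_train))
    experts.append(fit_expert(X_train[idx], y_train[idx]))

pred = np.mean([X_test[keep] @ w for w in experts], axis=0)
r = np.corrcoef(pred, y_test[keep])[0, 1]
print(f"cross-sound correlation on similar bouts: {r:.2f}")
```

Averaging the committee members' outputs damps the variance of any single bootstrap fit, which is the usual motivation for committee machines; the gating step mirrors the abstract's claim that grouping bouts by similarity, regardless of sound type, is what makes cross-sound prediction tractable.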

Authors

  • T R Chang
    Department of Computer Science and Information Engineering, Southern Taiwan University of Science and Technology, Tainan, Taiwan, ROC.
  • D Šuta
    Department of Cognitive Systems and Neurosciences, Czech Institute of Informatics, Robotics and Cybernetics, Czech Technical University, Prague, Czech Republic; Department of Auditory Neuroscience, Academy of Sciences of the Czech Republic, Czech Republic.
  • T W Chiu
    Department of Biological Science and Technology, National Chiao-Tung University, Hsinchu, Taiwan, ROC; Center For Intelligent Drug Systems and Smart Bio-devices (IDS2B), National Chiao-Tung University, Hsinchu, Taiwan, ROC. Electronic address: twchiu@g2.nctu.edu.tw.