Hierarchical Learning of Statistical Regularities over Multiple Timescales of Sound Sequence Processing: A Dynamic Causal Modeling Study.

Journal: Journal of Cognitive Neuroscience
PMID:

Abstract

Our understanding of the sensory environment is contextualized on the basis of prior experience. Measurement of auditory event-related potentials (ERPs) provides insight into automatic processes that contextualize the relevance of sound as a function of how sequences change over time. However, task-independent exposure to sound has revealed that strong first impressions exert a lasting impact on how the relevance of sound is contextualized. Dynamic causal modeling was applied to auditory ERPs collected during the presentation of alternating pattern sequences. A local regularity (a rare sound, p = .125, vs. a common sound, p = .875) alternated to create a longer-timescale regularity (sound probabilities alternated regularly, creating a predictable block length), and the longer-timescale regularity changed halfway through the sequence (the regular block length became shorter or longer). Predictions should therefore be revised for local patterns when blocks alternated and for the longer-timescale pattern when the block length changed. Dynamic causal modeling revealed an overall higher precision for the error signal to the rare sound in the first block type, consistent with the first impression. Connectivity changes in response to errors within the underlying neural network also differed between the two block types, with significantly more revision of predictions in the arrangement that violated the first impression. Furthermore, the effects of the block-length change suggested that errors within the first block type exerted more influence on the updating of longer-timescale predictions. These observations support the hypothesis that automatic sequential learning creates a high-precision context (first impression) that impacts learning rates and updates to those learning rates when predictions arising from that context are violated. The results provide further evidence of automatic pattern learning over multiple timescales simultaneously, even during task-independent passive exposure to sound.
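To make the nested structure of the paradigm concrete, the following Python sketch generates a sequence with the two timescales of regularity described in the abstract: tone probabilities (p = .125 vs. p = .875) swap roles between blocks, and the regular block length changes halfway through the sequence. The tone labels, block count, and block lengths here are illustrative assumptions for the sketch only, not the study's actual stimulus parameters.

```python
import numpy as np

def make_sequence(n_blocks=48, block_len_first=24, block_len_second=48,
                  p_rare=0.125, seed=0):
    """Generate a two-timescale oddball sequence (illustrative sketch).

    Local regularity: within each block, one tone is rare (p_rare) and
    the other common (1 - p_rare), and the two tones swap roles on
    every block. Longer-timescale regularity: blocks have a regular
    length, which changes halfway through the sequence (shorter or
    longer). Block lengths and tone labels are assumed values, not
    the parameters used in the study.
    """
    rng = np.random.default_rng(seed)
    tones = []
    for b in range(n_blocks):
        # The regular block length changes at the sequence midpoint.
        block_len = block_len_first if b < n_blocks // 2 else block_len_second
        # Rare and common tones alternate roles on each block.
        rare, common = ("A", "B") if b % 2 == 0 else ("B", "A")
        block = rng.choice([rare, common], size=block_len,
                           p=[p_rare, 1 - p_rare])
        tones.extend(block)
    return tones

seq = make_sequence()
print(seq[:20])  # e.g., mostly 'B' with occasional 'A' in the first block
```

A listener exposed to such a sequence could, in principle, learn both the within-block probabilities and the block-level timing, which is the multi-timescale learning the modeling results address.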

Authors

  • Kaitlin Fitzgerald
    University of Newcastle, Callaghan, Australia.
  • Ryszard Auksztulewicz
    Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London WC1N 3BG, UK.
  • Alexander Provost
    University of Newcastle, Callaghan, Australia.
  • Bryan Paton
    University of Newcastle, Callaghan, Australia.
  • Zachary Howard
    University of Newcastle, Callaghan, Australia.
  • Juanita Todd
    University of Newcastle, Callaghan, Australia.