Hierarchical Reinforcement Learning, Sequential Behavior, and the Dorsal Frontostriatal System.

Journal: Journal of Cognitive Neuroscience

Abstract

To effectively behave within ever-changing environments, biological agents must learn and act at varying hierarchical levels such that a complex task may be broken down into more tractable subtasks. Hierarchical reinforcement learning (HRL) is a computational framework that provides an understanding of this process by combining sequential actions into one temporally extended unit called an option. However, there are still open questions within the HRL framework, including how options are formed and how HRL mechanisms might be realized within the brain. In this review, we propose that the existing human motor sequence literature can aid in understanding both of these questions. We give specific emphasis to visuomotor sequence learning tasks such as the discrete sequence production task and the M × N (M steps × N sets) task to understand how hierarchical learning and behavior manifest across sequential action tasks as well as how the dorsal cortical-subcortical circuitry could support this kind of behavior. This review highlights how motor chunks within a motor sequence can function as HRL options. Furthermore, we aim to merge findings from motor sequence literature with reinforcement learning perspectives to inform experimental design in each respective subfield.
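For readers less familiar with the options framework mentioned above, the brief Python sketch below illustrates the core idea: a motor chunk can be treated as an HRL option, a fixed sequence of primitive actions that is selected, executed, and valued as a single temporally extended unit rather than step by step. This is a minimal illustration under stated assumptions, not a model from the review; the action names, the stub environment, and the reward values are placeholders.

import random

# Minimal sketch (illustrative assumptions, not the authors' model):
# each "option" is a motor chunk, i.e., a fixed sequence of primitive
# actions executed as one temporally extended unit, and option values
# are learned by incremental sample averaging.

OPTIONS = {
    "chunk_A": ["left", "up", "up"],
    "chunk_B": ["right", "right", "down"],
}

values = {name: 0.0 for name in OPTIONS}  # learned value of each option
counts = {name: 0 for name in OPTIONS}

def run_option(name):
    """Execute every primitive in the chunk, then return a terminal reward.
    The environment is a stub that happens to prefer chunk_A (an assumption)."""
    for action in OPTIONS[name]:
        pass  # a real task would step the environment with `action` here
    return 1.0 if name == "chunk_A" else 0.2

for trial in range(200):
    # Epsilon-greedy choice is made among options, not among primitives.
    if random.random() < 0.1:
        choice = random.choice(list(OPTIONS))
    else:
        choice = max(values, key=values.get)
    reward = run_option(choice)
    counts[choice] += 1
    # Incremental sample-average update of the chosen option's value.
    values[choice] += (reward - values[choice]) / counts[choice]

print(values)  # chunk_A's value approaches 1.0, chunk_B's approaches 0.2

The point of the sketch is that credit assignment and choice operate at the level of the chunk: the agent never evaluates the individual key presses inside a chunk, which is the sense in which a motor chunk can function as an HRL option.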

Authors

  • Miriam Janssen
    National Institute of Mental Health, Bethesda, MD.
  • Christopher LeWarne
    National Institute of Mental Health, Bethesda, MD.
  • Diana Burk
    National Institute of Mental Health, Bethesda, MD.
  • Bruno B Averbeck
    Laboratory of Neuropsychology, Section on Learning and Decision Making, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA.