Latent variable sequence identification for cognitive models with neural network estimators.

Journal: Behavior Research Methods

Abstract

Extracting time-varying latent variables from computational cognitive models plays a key role in uncovering the dynamic cognitive processes that drive behavior. However, existing methods are limited to inferring latent variable sequences in a relatively narrow class of cognitive models. For example, a broad class of relevant cognitive models with intractable likelihoods is currently out of reach of standard techniques based on maximum a posteriori parameter estimation. Here, we present a simulation-based approach that leverages recurrent neural networks to map experimental data directly onto the targeted latent variable space. We first show in simulations that our approach achieves competitive performance in inferring latent variable sequences in both likelihood-tractable and likelihood-intractable models. We then demonstrate its applicability to real-world datasets. Furthermore, the approach is practical for standard-size individual datasets, generalizable across different computational models, and adaptable to continuous and discrete latent spaces. Our work underscores that combining recurrent neural networks with simulated data to identify model latent variable sequences broadens the scope of cognitive models researchers can explore, enabling the testing of a wider range of theories.
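The core of the simulation-based approach is that a cognitive model with known parameters can generate unlimited behavioral data alongside the ground-truth latent variable sequence that produced it, giving supervised training pairs for a recurrent network. As an illustration only (the abstract does not specify the paper's models or architecture), the sketch below simulates a standard delta-rule Q-learning agent on a two-armed bandit; the function names, parameter values, and reward probabilities are assumptions for the example, not the authors' setup.

```python
import numpy as np

def simulate_q_learning(n_trials=200, alpha=0.3, beta=5.0,
                        reward_probs=(0.8, 0.2), seed=0):
    """Simulate a two-armed bandit agent with delta-rule Q-learning.

    Returns the observable data (choices, rewards) together with the
    ground-truth latent variable sequence (the Q-value trajectory) that
    a neural network estimator would be trained to recover.
    All names and defaults here are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    q = np.zeros(2)                      # latent values, one per arm
    choices, rewards, q_traj = [], [], []
    for _ in range(n_trials):
        q_traj.append(q.copy())          # latent state before the choice
        p = np.exp(beta * q) / np.exp(beta * q).sum()  # softmax policy
        c = rng.choice(2, p=p)           # observed choice
        r = float(rng.random() < reward_probs[c])      # observed reward
        q[c] += alpha * (r - q[c])       # delta-rule latent update
        choices.append(c)
        rewards.append(r)
    return np.array(choices), np.array(rewards), np.array(q_traj)

# One simulated training pair: (choices, rewards) would be the network
# input sequence, and q_traj the latent target sequence.
choices, rewards, q_traj = simulate_q_learning()
```

Repeating this simulation across many sampled parameter settings yields a training set on which a recurrent network can learn the mapping from behavioral sequences to latent trajectories, without ever evaluating the model's likelihood.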

Authors

  • Ti-Fen Pan
    Department of Psychology, University of California, Berkeley, USA.
  • Jing-Jing Li
  • Bill Thompson
    Department of Psychology, University of California, Berkeley, USA.
  • Anne G. E. Collins
    Department of Psychology, University of California, Berkeley, USA.