Statistical learning of parts and wholes: A neural network approach.

Journal: Journal of Experimental Psychology: General
PMID:

Abstract

Statistical learning is often considered to be a means of discovering the units of perception, such as words and objects, and representing them as explicit "chunks." However, entities are not undifferentiated wholes but often contain parts that contribute systematically to their meanings. Studies of incidental auditory or visual statistical learning suggest that, as participants learn about wholes, they become insensitive to the parts embedded within them, but this seems difficult to reconcile with a broad range of findings in which parts and wholes work together to contribute to behavior. Bayesian approaches provide a principled description of how parts and wholes can contribute simultaneously to performance, but are generally not intended to model the computations that actually give rise to this performance. In the current work, we develop an account based on learning in artificial neural networks in which the representation of parts and wholes is a matter of degree, and the extent to which they cooperate or compete arises naturally through incidental learning. We show that the approach accounts for a wide range of findings concerning the relationship between parts and wholes in auditory and visual statistical learning, including some findings previously thought to be problematic for neural network approaches.
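To make the notion of incidental statistical learning in a neural network concrete, the following is a minimal illustrative sketch, not the authors' model: a single softmax prediction layer trained on a continuous syllable stream built from three-syllable "words" (in the style of classic auditory statistical learning designs). The syllable inventory, word list, and hyperparameters are placeholders chosen for illustration.

```python
# Illustrative sketch (not the paper's model): a network incidentally learns
# syllable-to-syllable transition statistics from an unsegmented stream.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical three-syllable "words" and the syllable inventory they define.
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["bi", "da", "ku"]]
syllables = sorted({s for w in words for s in w})
idx = {s: i for i, s in enumerate(syllables)}
n = len(syllables)

# Build an unsegmented stream by concatenating randomly chosen words.
stream = [s for _ in range(500) for s in words[rng.integers(len(words))]]

# Single softmax layer: predict the next syllable from the current one.
W = np.zeros((n, n))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.1
for cur, nxt in zip(stream[:-1], stream[1:]):
    x = np.zeros(n); x[idx[cur]] = 1.0   # one-hot current syllable
    p = softmax(W @ x)                   # predicted next-syllable distribution
    t = np.zeros(n); t[idx[nxt]] = 1.0   # one-hot observed next syllable
    W += lr * np.outer(t - p, x)         # cross-entropy gradient step

def pred(cur, nxt):
    x = np.zeros(n); x[idx[cur]] = 1.0
    return softmax(W @ x)[idx[nxt]]

# Within-word transitions end up predicted more strongly than between-word
# transitions, purely from incidental exposure to the stream.
print("within-word  tu->pi:", round(pred("tu", "pi"), 2))
print("between-word ro->go:", round(pred("ro", "go"), 2))
```

In this sketch the sensitivity to "parts" (transitional probabilities) and "wholes" (recurring triplets) is a graded consequence of ordinary error-driven learning rather than explicit chunk extraction, which is the general flavor of account the abstract describes.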

Authors

  • David C Plaut
    Department of Psychology and the Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213, USA. Electronic address: plaut@cmu.edu.
  • Anna K Vande Velde
    Department of Psychology.