Learning the generative principles of a symbol system from limited examples.

Journal: Cognition

Abstract

The processes and mechanisms of human learning are central to inquiry in psychology, cognitive science, developmental science, education, and artificial intelligence. Debates linger over questions of human learning, one of the most contentious being whether simple associative processes can explain children's prodigious learning and, in doing so, lead to artificial intelligence that parallels human learning. One phenomenon at the center of these debates is a form of far generalization, sometimes called "generative learning" because the learner's behavior seems to reflect more than co-occurrences among specifically experienced instances and instead to be based on principles through which new instances may be generated. In two experimental studies (N = 148) of preschool children's learning of how multi-digit number names map to their written forms, and in a computational modeling experiment using a deep learning neural network, we show that data sets with a suite of inter-correlated, imperfectly predictive components yield far and systematic generalizations that accord with generative principles, and do so despite limited examples and exceptions in the training data. Implications for human cognition, cognitive development, education, and machine learning are discussed.
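
The computational experiment described above trains a deep network on pairs of multi-digit number names and their written digit strings. As a concrete illustration only (the paper's actual stimuli, architecture, and train/test splits are not specified in the abstract), the Python sketch below generates such name-to-numeral pairs and a hypothetical far-generalization split: 4-digit items never appear in training, yet they are composed entirely of parts (ones, irregular teens, tens, "hundred", "thousand") that do.

    # A minimal sketch (not the authors' code) of the name-to-numeral task.
    # The split is a hypothetical illustration of far generalization.

    ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
            "eight", "nine"]
    TEENS = ["ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
             "sixteen", "seventeen", "eighteen", "nineteen"]
    TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
            "eighty", "ninety"]

    def number_name(n: int) -> str:
        """English name for 0 <= n <= 9999."""
        if n < 10:
            return ONES[n]
        if n < 20:                      # irregular teens: exceptions in the data
            return TEENS[n - 10]
        if n < 100:
            tens, ones = divmod(n, 10)
            return TENS[tens] + ("" if ones == 0 else " " + ONES[ones])
        if n < 1000:
            hundreds, rest = divmod(n, 100)
            name = ONES[hundreds] + " hundred"
            return name if rest == 0 else name + " " + number_name(rest)
        thousands, rest = divmod(n, 1000)
        name = ONES[thousands] + " thousand"
        return name if rest == 0 else name + " " + number_name(rest)

    # Training pairs: a limited sample of 2- and 3-digit items.
    # Test pairs: unseen 4-digit items built from the same named parts.
    train = [(number_name(n), str(n)) for n in range(10, 1000, 7)]
    test = [(number_name(n), str(n)) for n in (1234, 4070, 9009)]

    print(train[:3])   # ('ten', '10'), ('seventeen', '17'), ('twenty four', '24')
    print(test)        # far-generalization targets never seen in training

A learner that succeeds on the held-out items must exploit the compositional structure relating names to digit positions rather than memorized pairs, which is the sense of "generative learning" at issue in the abstract.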

Authors

  • Lei Yuan
    Department of Psychological and Brain Sciences, Indiana University, United States of America.
  • Violet Xiang
    School of Informatics, Computing, and Engineering, Indiana University, United States of America.
  • David Crandall
    School of Informatics, Computing, and Engineering, Indiana University, United States of America.
  • Linda Smith
    Department of Psychological and Brain Sciences, Indiana University, United States of America; School of Psychology, University of East Anglia, United Kingdom of Great Britain and Northern Ireland.