A review of computational models of basic rule learning: The neural-symbolic debate and beyond.

Journal: Psychonomic Bulletin & Review

Abstract

We present a critical review of computational models of the generalization of simple grammar-like rules, such as ABA and ABB. In particular, we focus on models attempting to account for the empirical results of Marcus et al. (Science, 283(5398), 77-80, 1999). That study reported evidence of generalization behavior in 7-month-old infants, using an Artificial Language Learning paradigm. The authors failed to replicate this behavior in neural network simulations, and claimed that this failure reveals inherent limitations of a whole class of neural networks: those that do not incorporate symbolic operations. A great number of computational models were proposed in follow-up studies, fuelling a heated debate about what is required for a model to generalize. Twenty years later, this debate is still not settled. In this paper, we review a large number of the proposed models and present a critical analysis of them, in terms of how they contribute to answering the most relevant questions raised by the experiment. After identifying which aspects require further research, we propose a list of desiderata for advancing our understanding of generalization.

Authors

  • Raquel G Alhama
    Language Development Department, Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands. rgalhama@mpi.nl.
  • Willem Zuidema
    Institute for Logic, Language and Computation, University of Amsterdam, Science Park 107, 1098 XG Amsterdam, The Netherlands.