AI Medical Compendium Journal:
Cognitive science

Showing 1 to 10 of 40 articles

Evaluating (and Improving) the Correspondence Between Deep Neural Networks and Human Representations.

Decades of psychological research have been aimed at modeling how people learn features and categories. The empirical validation of these theories is often based on artificial stimuli with simple representations. Recently, deep neural networks have r...

Too Much of a Good Thing: How Novelty Biases and Vocabulary Influence Known and Novel Referent Selection in 18-Month-Old Children and Associative Learning Models.

Identifying the referent of novel words is a complex process that young children do with relative ease. When given multiple objects along with a novel word, children select the most novel item, sometimes retaining the word-referent link. Prior work i...

Representing, Running, and Revising Mental Models: A Computational Model.

People use commonsense science knowledge to flexibly explain, predict, and manipulate the world around them, yet we lack computational models of how this commonsense science knowledge is represented, acquired, utilized, and revised. This is an import...

Why Are There Developmental Stages in Language Learning? A Developmental Robotics Model of Language Development.

Most theories of learning would predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, this fails to explain why natural cognitive development typically progresses in stages. Models...

Incremental Bayesian Category Learning From Natural Language.

Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper, we focus on categories acquired from natural language stimuli, that is, words (e.g., ...

Biologically Plausible, Human-Scale Knowledge Representation.

Several approaches to implementing symbol-like representations in neurally plausible models have been proposed. These approaches include binding through synchrony (Shastri & Ajjanagadde), "mesh" binding (van der Velde & de Kamps), and conjunctive...

Learning Orthographic Structure With Sequential Generative Neural Networks.

Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though ...

Multiscale Modeling of Gene-Behavior Associations in an Artificial Neural Network Model of Cognitive Development.

In the multidisciplinary field of developmental cognitive neuroscience, statistical associations between levels of description play an increasingly important role. One example of such associations is the observation of correlations between relatively...

State-trace analysis: dissociable processes in a connectionist network?

Some argue that the common practice of inferring multiple processes or systems from a dissociation is flawed (Dunn, 2003). One proposed solution is state-trace analysis (Bamber, 1979), which involves plotting, across two or more conditions of interest, pe...

Do lemmas speak German? A verb position effect in German structural priming.

Lexicalized theories of syntax often assume that verb-structure regularities are mediated by lemmas, which abstract over variation in verb tense and aspect. German syntax seems to challenge this assumption, because verb position depends on tense and ...