Common representation learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has received considerable attention recently. Two popular paradigms here are canonical correlation analysis (CCA)-based a...
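The abstract is cut off, so purely as background for the CCA-based paradigm it names, here is a minimal NumPy sketch of classical two-view CCA: both views are whitened and the SVD of the cross-covariance yields projections into a shared subspace where the views are maximally correlated. The function name and the ridge term `reg` are my own assumptions, not from the paper.

```python
import numpy as np

def cca(X, Y, k=1, reg=1e-6):
    """Classical CCA: return k-dimensional projections A, B for views
    X (n x dx) and Y (n x dy), plus the top-k canonical correlations."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    # Regularised within-view and cross-view covariances.
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whiten each view via a Cholesky factor, then SVD the
    # whitened cross-covariance; singular values are the correlations.
    Lx = np.linalg.cholesky(Cxx)
    Ly = np.linalg.cholesky(Cyy)
    M = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T
    U, S, Vt = np.linalg.svd(M)
    A = np.linalg.solve(Lx.T, U[:, :k])   # project X with X @ A
    B = np.linalg.solve(Ly.T, Vt[:k].T)   # project Y with Y @ B
    return A, B, S[:k]
```

The canonical variates `X @ A` and `Y @ B` are the shared-subspace embeddings; `S` measures how well the two views agree along each direction.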
The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be ...
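As a generic illustration of the algorithm this abstract describes (not the paper's specific networks), here is a minimal one-hidden-layer autoencoder in NumPy: a tanh encoder and linear decoder trained by plain batch gradient descent on squared reconstruction error. Layer sizes, learning rate, and initialisation scale are illustrative assumptions.

```python
import numpy as np

def train_autoencoder(X, hidden=2, lr=0.2, epochs=2000, seed=0):
    """Train x -> tanh(x W1 + b1) -> (code) W2 + b2 ~ x by gradient
    descent on mean squared reconstruction error."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(scale=0.1, size=(d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.1, size=(hidden, d)); b2 = np.zeros(d)
    losses = []
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # encoder: code for each input
        Xhat = H @ W2 + b2                  # linear decoder: reconstruction
        err = Xhat - X
        losses.append(float((err ** 2).mean()))
        dH = (err @ W2.T) * (1.0 - H ** 2)  # backprop through tanh
        W2 -= lr * H.T @ err / n; b2 -= lr * err.mean(0)
        W1 -= lr * X.T @ dH / n;  b1 -= lr * dH.mean(0)
    return (W1, b1, W2, b2), losses
```

Trained on data lying near a low-dimensional subspace, the `hidden`-unit code learns a compressed representation and the reconstruction loss falls well below its initial value.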
Neural associative networks are a promising computational paradigm for both modeling neural circuits of the brain and implementing associative memory and Hebbian cell assemblies in parallel VLSI or nanoscale hardware. Previous work has extensively in...
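Since the abstract is truncated, the following is background only: a toy Willshaw-style binary associative memory of the kind referred to, with a clipped Hebbian learning rule and one-step threshold retrieval. The pattern layout and threshold choice are illustrative assumptions.

```python
import numpy as np

def store(patterns):
    """Clipped Hebbian rule: binary synapse W[i, j] switches on iff
    units i and j were ever simultaneously active in a stored pattern."""
    d = patterns.shape[1]
    W = np.zeros((d, d), dtype=bool)
    for p in patterns:
        W |= np.outer(p, p)   # boolean outer product = pairwise co-activity
    return W

def recall(W, cue):
    """One-step retrieval: activate every unit whose synaptic input from
    the active cue units reaches the cue's own activity level."""
    s = W.astype(int) @ cue.astype(int)
    return s >= int(cue.sum())
```

Given a partial cue (a subset of one stored pattern's units), retrieval completes the full pattern, because exactly the units that co-fired with every cue unit reach threshold.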
Neuroscience and biobehavioral reviews
Nov 10, 2015
For almost a century, Pavlovian conditioning has been the principal experimental paradigm for investigating the development and generalization of fear. However, despite this rich research tradition, the conceptualization of fear generalization has remained so...
A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that ...
Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper, we focus on categories acquired from natural language stimuli, that is, words (e.g., ...
Neural networks : the official journal of the International Neural Network Society
Oct 29, 2015
Robotic mapping and localization systems typically operate at either one fixed spatial scale, or over two, combining a local metric map and a global topological map. In contrast, recent high profile discoveries in neuroscience have indicated that ani...
IEEE transactions on neural networks and learning systems
Oct 27, 2015
Human preferences are usually measured using ordinal variables. A system whose goal is to estimate the preferences of humans and their underlying decision mechanisms must learn the ordering of any given sample set. We consider the solution of ...
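The abstract breaks off before describing the method, so as a generic illustration of learning an ordering from ordinal labels, here is a RankNet-style sketch: a linear scoring function trained with a logistic loss on every pair of samples whose ordinal labels differ. All names and hyperparameters are my own assumptions.

```python
import numpy as np

def fit_pairwise_ranker(X, y, lr=0.1, epochs=200):
    """Learn weights w so that w @ x_i > w @ x_j whenever the ordinal
    label y[i] exceeds y[j], via pairwise logistic gradient ascent."""
    w = np.zeros(X.shape[1])
    # One pair per ordered comparison implied by the ordinal labels.
    pairs = [(i, j) for i in range(len(y)) for j in range(len(y)) if y[i] > y[j]]
    for _ in range(epochs):
        for i, j in pairs:
            diff = X[i] - X[j]
            p = 1.0 / (1.0 + np.exp(-(w @ diff)))  # P(i ranked above j)
            w += lr * (1.0 - p) * diff             # log-likelihood gradient
    return w
```

After training, sorting samples by `X @ w` reproduces the ordering implied by the ordinal labels.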
Information encoding in the nervous system relies on the precise spike timing of neurons; however, how such representations are formed in the first place remains an open question. Here we ...
Neural networks : the official journal of the International Neural Network Society
Oct 19, 2015
Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generali...
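The abstract cuts off mid-sentence (presumably "the generalized EM algorithm"). As a loose illustration only, here is a toy logistic unit trained with zero-mean Gaussian noise injected into the output-layer error signal, annealed over training. This is plain unconditioned noise, a stand-in for demonstration; the actual Noisy CNN algorithm selects noise satisfying a NEM positivity condition, which this sketch does not implement.

```python
import numpy as np

def train_noisy_logistic(X, y, sigma=0.1, lr=0.5, epochs=300, seed=0):
    """Logistic unit trained by batch gradient ascent on the log-likelihood,
    with zero-mean Gaussian noise added to the error term each step.
    sigma, lr, and the annealing factor are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))               # predicted P(y=1)
        err = (y - p) + sigma * rng.normal(size=len(y))  # noisy error signal
        w += lr * X.T @ err / len(y)
        sigma *= 0.99                                    # anneal the noise
    return w
```

Because the injected noise is zero-mean and annealed, the unit still converges to a good decision boundary on separable data.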