AIMC Topic: Learning

Showing 111 to 120 of 1396 articles

A Self-Driven GaO Memristor Synapse for Humanoid Robot Learning.

Small methods
In recent years, the rapid development of brain-inspired neuromorphic systems has created an imperative demand for artificial photonic synapses that operate with low power consumption. In this study, a self-driven memristor synapse based on gallium o...

Demystifying unsupervised learning: how it helps and hurts.

Trends in cognitive sciences
Humans and machines rarely have access to explicit external feedback or supervision, yet manage to learn. Most modern machine learning systems succeed because they benefit from unsupervised data. Humans are also expected to benefit and yet, mysteriou...

Learning to segment self-generated from externally caused optic flow through sensorimotor mismatch circuits.

Neural networks: the official journal of the International Neural Network Society
Efficient sensory detection requires the capacity to ignore task-irrelevant information, for example when optic flow patterns created by egomotion need to be disentangled from object perception. To investigate how this is achieved in the visual syste...

Human-to-Robot Handover Based on Reinforcement Learning.

Sensors (Basel, Switzerland)
This study explores manipulator control using reinforcement learning, specifically targeting anthropomorphic gripper-equipped robots, with the objective of enhancing the robots' ability to safely exchange diverse objects with humans during human-robo...

Operant Conditioning Neuromorphic Circuit With Addictiveness and Time Memory for Automatic Learning.

IEEE transactions on biomedical circuits and systems
Most operant conditioning circuits predominantly focus on simple feedback processes; few studies consider the intricacies of feedback outcomes and the uncertainty of feedback time. This paper proposes a neuromorphic circuit based on operant conditionin...

A neural network model of differentiation and integration of competing memories.

eLife
What determines when neural representations of memories move together (integrate) or apart (differentiate)? Classic supervised learning models posit that, when two stimuli predict similar outcomes, their representations should integrate. However, the...

Quo vadis, planning?

The Behavioral and brain sciences
Deep meta-learning is the driving force behind advances in contemporary AI research, and a promising theory of flexible cognition in natural intelligence. We agree with Binz et al. that many supposedly "model-based" behaviours may be better explained...

Meta-learning as a bridge between neural networks and symbolic Bayesian models.

The Behavioral and brain sciences
Meta-learning is even more broadly relevant to the study of inductive biases than Binz et al. suggest: Its implications go beyond the extensions to rational analysis that they discuss. One noteworthy example is that meta-learning can act as a bridge ...

The hard problem of meta-learning is what-to-learn.

The Behavioral and brain sciences
Binz et al. highlight the potential of meta-learning to greatly enhance the flexibility of AI algorithms, as well as to approximate human behavior more accurately than traditional learning methods. We wish to emphasize a basic problem that lies under...

Efficient visual representations for learning and decision making.

Psychological review
The efficient representation of visual information is essential for learning and decision making due to the complexity and uncertainty of the world, as well as inherent constraints on the capacity of cognitive systems. We hypothesize that biological ...