Concept learning through deep reinforcement learning with memory-augmented neural networks.

Journal: Neural networks : the official journal of the International Neural Network Society

Abstract

Deep neural networks have shown superior performance in many regimes when memorizing familiar patterns from large amounts of data. However, the standard supervised deep learning paradigm remains limited when new concepts must be learned efficiently from scarce data. In this paper, we present a memory-augmented neural network motivated by the process of human concept learning. The training procedure, imitating the way humans form concepts, learns to distinguish samples from different classes and to aggregate samples of the same kind. To better exploit the advantages of this human-inspired behavior, we propose a sequential process in which the network decides, at every step, how to remember each sample. In this sequential process, a stable and interactive memory serves as a key module. We validate our model on several typical one-shot learning tasks and on an exploratory outlier detection problem. In all experiments, our model is highly competitive, matching or outperforming strong baselines.
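To make the sequential process concrete, the following is a minimal sketch of an episodic one-shot classification loop with an external key-value memory. It is an illustration only, not the authors' architecture: the `ExternalMemory` class, the cosine-similarity read, and the always-write policy are assumptions standing in for the learned embedding network and the learned (reinforcement-trained) decision of how to remember each sample.

```python
import numpy as np

class ExternalMemory:
    """Hypothetical key-value memory: keys are sample embeddings, values are
    class labels. A stand-in for the paper's 'stable and interactive memory'."""
    def __init__(self, dim):
        self.keys = np.empty((0, dim))
        self.values = []

    def read(self, query):
        # Content-based read: return the label of the most similar stored key.
        if not self.values:
            return None
        sims = self.keys @ query / (
            np.linalg.norm(self.keys, axis=1) * np.linalg.norm(query) + 1e-8)
        return self.values[int(np.argmax(sims))]

    def write(self, key, value):
        # Append the new sample; a learned policy could instead overwrite
        # or skip writes.
        self.keys = np.vstack([self.keys, key])
        self.values.append(value)

def run_episode(samples, labels, mem):
    """One episode: at each step, predict the label from memory, then store
    the sample. A correct prediction corresponds to a reward of +1 in the
    reinforcement-learning view."""
    correct = 0
    for x, y in zip(samples, labels):
        if mem.read(x) == y:
            correct += 1
        mem.write(x, y)  # here 'how to remember' is fixed; in the paper it is learned
    return correct
```

On a toy episode with two orthogonal classes, the model necessarily errs the first time each class appears and succeeds on repeats, which is the essence of one-shot learning from memory.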

Authors

  • Jing Shi
    The First Affiliated Hospital of China Medical University, Shenyang 110122, Liaoning Province, China.
  • Jiaming Xu
    Institute of Automation, Chinese Academy of Sciences (CAS), Beijing, PR China.
  • Yiqun Yao
    Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing, China; Research Center for Brain-inspired Intelligence, CASIA, China; University of Chinese Academy of Sciences, China.
  • Bo Xu
    State Key Laboratory of Cardiovascular Disease, Fuwai Hospital, National Center for Cardiovascular Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100037, China.