Lifelong learning with Shared and Private Latent Representations learned through synaptic intelligence.

Journal: Neural Networks: The Official Journal of the International Neural Network Society
Published Date:

Abstract

This paper explores a novel lifelong learning method with Shared and Private Latent Representations (SPLR), which are learned through synaptic intelligence. To solve a sequence of tasks, SPLR considers the entire parameter learning trajectory: it learns task-invariant representations, whose parameters change little along the trajectory, and task-specific features, whose parameters change substantially. In lifelong learning scenarios, the model therefore obtains a task-invariant structure shared by all tasks while also retaining private, task-specific properties for each task. To reduce the number of parameters, a sparsity-promoting ℓ regularization is applied to the weights. We evaluate SPLR on multiple datasets under lifelong learning settings; on these datasets it achieves performance comparable to existing lifelong learning approaches while learning a sparse network with fewer parameters and requiring less training time.
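
The abstract describes three ingredients: per-parameter importance measured along the training trajectory via synaptic intelligence, a split of the weights into shared (task-invariant) and private (task-specific) parts based on that importance, and a sparsity-promoting regularizer. The sketch below illustrates these ideas under stated assumptions only; the paper's exact SPLR procedure is not given in the abstract, so the quantile-based shared/private split, the ℓ1 form of the sparsity penalty, and all function names (train_task_with_si, split_shared_private, l1_penalty) are illustrative choices, not the authors' implementation.

import torch

def train_task_with_si(model, loss_fn, data_loader, lr=0.01, xi=1e-3):
    """Minimal synaptic-intelligence bookkeeping for one task (a sketch).

    Accumulates the per-parameter path integral w_k = -sum_t g_t * dtheta_t,
    i.e. how much each weight contributed to lowering the loss along the
    parameter update trajectory, then normalises by the squared total
    displacement to obtain an importance score omega_k.
    """
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    w = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    theta_start = {n: p.detach().clone() for n, p in model.named_parameters()}

    for x, y in data_loader:
        prev = {n: p.detach().clone() for n, p in model.named_parameters()}
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        for n, p in model.named_parameters():
            if p.grad is not None:
                # contribution of this update step to the path integral
                w[n] += -p.grad.detach() * (p.detach() - prev[n])

    omega = {
        n: w[n] / ((p.detach() - theta_start[n]) ** 2 + xi)
        for n, p in model.named_parameters()
    }
    return omega

def split_shared_private(omega, quantile=0.5):
    """Partition weights by importance: low-importance weights are treated as
    shared / task-invariant, high-importance weights as private / task-specific.
    The median threshold is an illustrative choice, not taken from the paper."""
    flat = torch.cat([o.flatten() for o in omega.values()])
    thresh = torch.quantile(flat, quantile)
    return {n: (o > thresh) for n, o in omega.items()}  # True = private

def l1_penalty(model, lam=1e-4):
    """Sparsity-promoting term added to the task loss (assumed here to be ℓ1)."""
    return lam * sum(p.abs().sum() for p in model.parameters())

In such a setup, after each task the importance scores would decide which weights are consolidated as the shared backbone and which remain free to specialise on the next task, with the ℓ1 term keeping the overall network sparse.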

Authors

  • Yang Yang
    Department of Gastrointestinal Surgery, The Third Hospital of Hebei Medical University, Shijiazhuang, China.
  • Jie Huang
    Department of Critical Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200025, China.
  • Dexiu Hu
    PLA Strategic Support Force Information Engineering University, Zhengzhou, Henan, 450001, China. Electronic address: paper_hdx@126.com.