Multi-source sequential knowledge regression by using transfer RNN units.

Journal: Neural Networks: The Official Journal of the International Neural Network Society

Abstract

Transfer learning has achieved considerable success in deep neural networks by reusing knowledge from source domains. However, most existing transfer learning strategies for neural networks target classification tasks or rely on simple training strategies, which limits their use in multi-source knowledge regression because of ineffective learning of common latent features and loss of source information in regression. In this paper, we propose transferable Recurrent Neural Network (RNN) units based on the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) to adapt source knowledge in multi-source regression scenarios. Specifically, two knowledge adaptation methods are proposed: the first utilizes similarity weights as the transfer coefficients of each source, and the second defines a transfer gate to control the flow of source knowledge. With the proposed methods, useful source knowledge embedded in both the internal state and the output is adapted. Extensive experiments on both synthetic data and human motion prediction tasks on the Human3.6M dataset demonstrate the superiority of our transfer RNN units compared with conventional models.
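The transfer-gate idea from the abstract can be illustrated with a minimal sketch: a GRU-style cell whose output mixes its own updated hidden state with a similarity-weighted pool of (frozen) source hidden states, where the mixing ratio comes from a learned gate. All names (`TransferGRUCell`, `alpha`, `Wt`) and the exact gating formulation are assumptions for illustration, not the paper's actual equations.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TransferGRUCell:
    """Hypothetical sketch: GRU cell augmented with a transfer gate
    that injects similarity-weighted source hidden states."""

    def __init__(self, input_dim, hidden_dim, num_sources, seed=0):
        rng = np.random.default_rng(seed)
        d = input_dim + hidden_dim
        self.Wz = 0.1 * rng.standard_normal((d, hidden_dim))  # update gate
        self.Wr = 0.1 * rng.standard_normal((d, hidden_dim))  # reset gate
        self.Wh = 0.1 * rng.standard_normal((d, hidden_dim))  # candidate state
        # transfer gate sees the input, own state, and pooled source states
        self.Wt = 0.1 * rng.standard_normal((d + hidden_dim, hidden_dim))
        # per-source similarity weights (in the paper these would be
        # learned or estimated from source/target similarity)
        self.alpha = np.full(num_sources, 1.0 / num_sources)

    def step(self, x, h, source_hs):
        # similarity-weighted pooling of the source hidden states
        pooled = np.tensordot(self.alpha, np.stack(source_hs), axes=1)
        xh = np.concatenate([x, h])
        z = sigmoid(xh @ self.Wz)
        r = sigmoid(xh @ self.Wr)
        h_cand = np.tanh(np.concatenate([x, r * h]) @ self.Wh)
        h_new = (1 - z) * h + z * h_cand                 # standard GRU update
        t = sigmoid(np.concatenate([xh, pooled]) @ self.Wt)  # transfer gate
        # gate controls how much source knowledge flows into the state
        return (1 - t) * h_new + t * pooled

# usage: two source models feed their hidden states into the target cell
cell = TransferGRUCell(input_dim=4, hidden_dim=8, num_sources=2)
h = np.zeros(8)
sources = [np.ones(8) * 0.5, np.ones(8) * -0.5]
h = cell.step(np.ones(4), h, sources)
```

The same gating could be applied to the cell state of an LSTM; here a GRU keeps the sketch short.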

Authors

  • Xiurui Xie
    Department of Computer Science and Engineering, University of Electronic Science and Technology of China, 611731, Chengdu, Sichuan, China.
  • Guisong Liu
    Department of Computer Science and Engineering, University of Electronic Science and Technology of China, 611731, Chengdu, Sichuan, China.
  • Qing Cai
  • Pengfei Wei
    School of Computing, National University of Singapore, Singapore. Electronic address: dcsweip@nus.edu.sg.
  • Hong Qu
    Center for Bioinformatics, State Key Laboratory of Protein and Plant Gene Research, College of Life Sciences, Peking University, Beijing 100871, China.