Long short-term memory with activation on gradient.

Journal: Neural Networks: The Official Journal of the International Neural Network Society

Abstract

As the number of long short-term memory (LSTM) layers increases, vanishing and exploding gradient problems are exacerbated and degrade the performance of the LSTM. In addition, an ill-conditioning problem arises in the training process of the LSTM and adversely affects its convergence. In this work, a simple and effective gradient activation method is applied to the LSTM, and empirical criteria for choosing the gradient activation hyperparameters are established. Activating the gradient means modifying the gradient with a specific function, called the gradient activation function. Moreover, different activation functions and different gradient operations are compared to show that gradient activation is effective for the LSTM. Furthermore, comparative experiments show that gradient activation alleviates the above problems and accelerates the convergence of the LSTM. The source code is publicly available at https://github.com/LongJin-lab/ACT-In-NLP.
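The abstract's core idea can be sketched in a few lines: before the optimizer step, each gradient is passed through an activation function so that very large gradients are bounded (mitigating explosion) while small gradients remain nearly unchanged (mitigating vanishing). The specific function and the hyperparameters below (`alpha`, `beta`, and the scaled-tanh choice itself) are illustrative assumptions, not the paper's exact design; the authors' implementation is in the linked repository.

```python
import numpy as np

def activate_gradient(grad, alpha=1.0, beta=1.0):
    """Illustrative gradient activation: a scaled tanh.
    Bounds each gradient element to (-beta, beta), while staying
    approximately linear (slope alpha * beta) near zero."""
    return beta * np.tanh(alpha * grad)

# Toy parameter update with activated gradients.
rng = np.random.default_rng(0)
params = rng.normal(size=4)
grads = np.array([1e3, -1e3, 1e-4, 0.5])  # mix of huge and tiny gradients

lr = 0.1
activated = activate_gradient(grads)  # huge entries squashed to about +/-1
params -= lr * activated              # small entries pass through almost unchanged
```

Unlike plain gradient clipping, which is piecewise and leaves sub-threshold gradients untouched, a smooth activation like this reshapes the whole gradient distribution, which is one way such a method could also improve conditioning.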

Authors

  • Chuan Qin
    Department of Neurology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China.
  • Liangming Chen
    Chongqing Key Laboratory of Big Data and Intelligent Computing, Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China; Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China.
  • Zangtai Cai
    The State Key Laboratory of Tibetan Intelligent Information Processing and Application, Qinghai Normal University, Xining 810008, China.
  • Mei Liu
    Department of Internal Medicine, Division of Medical Informatics, University of Kansas Medical Center, Kansas City, Missouri, USA.
  • Long Jin