BERTtoCNN: Similarity-preserving enhanced knowledge distillation for stance detection.

Journal: PLOS ONE
Published Date:

Abstract

In recent years, text sentiment analysis has attracted wide attention and has spurred the rise and development of stance detection research. The purpose of stance detection is to determine the author's stance (favor or against) towards a specific target or proposition in a text. Pre-trained language models such as BERT have been shown to perform well on this task. However, in many real-world scenarios they are computationally expensive, and such heavy models are difficult to deploy with limited resources. To improve efficiency while preserving performance, we propose a knowledge distillation model, BERTtoCNN, which combines the classic distillation loss and a similarity-preserving loss in a joint knowledge distillation framework. On the one hand, BERTtoCNN provides an efficient distillation process that trains a novel 'student' CNN from a much larger 'teacher' language model, BERT. On the other hand, through the similarity-preserving loss, BERTtoCNN guides the training of the student network so that input pairs with similar (dissimilar) activations in the teacher network also have similar (dissimilar) activations in the student network. We conduct experiments on open Chinese and English stance detection datasets. The experimental results show that our model clearly outperforms competitive baseline methods.
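The joint objective described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the classic distillation term is the usual KL divergence between temperature-softened teacher and student distributions, and the similarity-preserving term matches row-normalized pairwise-similarity (Gram) matrices of batch activations, following the general similarity-preserving distillation idea. The temperature `T` and weight `alpha` are hypothetical choices, and the hard-label cross-entropy term typically added in practice is omitted for brevity.

```python
# Sketch of a joint distillation objective: classic soft-target
# distillation loss plus a similarity-preserving loss.
# All hyperparameters (T, alpha) are illustrative assumptions.
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Mean KL divergence between softened teacher and student outputs."""
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)
    # T^2 rescaling keeps gradient magnitudes comparable across temperatures
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=1)) * T * T)

def similarity_preserving_loss(teacher_acts, student_acts):
    """Match row-normalized batch similarity (Gram) matrices of the
    teacher's and student's activations for the same input batch."""
    def norm_gram(a):
        g = a @ a.T  # (batch, batch) pairwise similarities
        row_norms = np.linalg.norm(g, axis=1, keepdims=True)
        return g / np.maximum(row_norms, 1e-12)
    b = teacher_acts.shape[0]
    diff = norm_gram(teacher_acts) - norm_gram(student_acts)
    return float(np.sum(diff ** 2) / (b * b))  # squared Frobenius norm / b^2

def joint_loss(s_logits, t_logits, s_acts, t_acts, alpha=0.5, T=2.0):
    """Weighted sum of the two distillation terms (alpha is hypothetical)."""
    return (distillation_loss(s_logits, t_logits, T)
            + alpha * similarity_preserving_loss(t_acts, s_acts))
```

Note that the similarity-preserving term only compares batch-level similarity structure, so the teacher and student activations may have different feature dimensions, which is what allows distilling from a wide BERT layer into a narrower CNN feature map.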

Authors

  • Yang Li
Chinese Center for Disease Control and Prevention, Beijing, China.
  • Yuqing Sun
    College of Information and Computer Engineering, Northeast Forestry University, Harbin, Heilongjiang, China.
  • Nana Zhu
    School of Manufacturing Science and Engineering, Key Laboratory of Testing Technology for Manufacturing Process, Ministry of Education, Southwest University of Science and Technology, Mianyang, China.