Unsupervised Sentence Representation Learning with Frequency-induced Adversarial tuning and Incomplete sentence filtering

Journal: Neural Networks: the official journal of the International Neural Network Society
Published Date:

Abstract

Pre-trained Language Models (PLMs) are nowadays the mainstay of Unsupervised Sentence Representation Learning (USRL). However, PLMs are sensitive to the frequency information of words in their pre-training corpora, resulting in an anisotropic embedding space in which the embeddings of high-frequency words are tightly clustered while those of low-frequency words are sparsely dispersed. This anisotropy gives rise to two problems, similarity bias and information bias, which lower the quality of sentence embeddings. To address them, we fine-tune PLMs by leveraging word frequency information and propose a novel USRL framework, namely Sentence Representation Learning with Frequency-induced Adversarial tuning and Incomplete sentence filtering (Slt-fai). We compute word frequencies over the pre-training corpora of PLMs and assign each word a binary frequency label by thresholding its frequency. With these labels, (1) we introduce a similarity discriminator that distinguishes the embeddings of high-frequency words from those of low-frequency words, and adversarially tune the PLM against it to obtain a uniform, frequency-invariant embedding space; and (2) we propose a novel incomplete-sentence detection task, in which an information discriminator distinguishes the embeddings of original sentences from those of incomplete sentences obtained by randomly masking several low-frequency words, encouraging the PLM to emphasize the more informative low-frequency words. Slt-fai is a flexible, plug-and-play framework that can be integrated with existing USRL techniques. We evaluate Slt-fai with various backbones on benchmark datasets, and the empirical results indicate that it is superior to existing USRL baselines.
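As an informal illustration of the two frequency-driven components described in the abstract, the sketch below shows, under simple assumptions, how words could be given thresholded frequency labels, how an incomplete sentence could be constructed by masking low-frequency words, and how a small discriminator combined with gradient reversal is one common way to realize adversarial tuning. It is not the authors' code; all names and hyperparameters (frequency_labels, make_incomplete, threshold, p) are hypothetical.

```python
# Illustrative sketch only; function names, the frequency threshold, and the
# masking probability are hypothetical and not the authors' implementation.
from collections import Counter
import random

import torch
import torch.nn as nn


def frequency_labels(corpus_tokens, threshold=100):
    """Assign each word a binary label by thresholding its corpus frequency:
    1 = high-frequency, 0 = low-frequency."""
    counts = Counter(tok for sent in corpus_tokens for tok in sent)
    return {word: int(count >= threshold) for word, count in counts.items()}


def make_incomplete(sentence, freq_label, mask_token="[MASK]", p=0.3):
    """Build an 'incomplete' sentence by randomly masking low-frequency words."""
    return [mask_token if freq_label.get(tok, 0) == 0 and random.random() < p else tok
            for tok in sentence]


class Discriminator(nn.Module):
    """A small MLP acting either as the similarity discriminator (high- vs.
    low-frequency word embeddings) or the information discriminator
    (original vs. incomplete sentence embeddings)."""

    def __init__(self, dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)  # one logit per embedding


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: one common way to tune an encoder adversarially,
    so that it learns to fool the similarity discriminator."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


# Toy usage: with threshold=2, only "the" counts as high-frequency here.
corpus = [["the", "cat", "sat"], ["the", "rare", "aardvark", "slept"]]
labels = frequency_labels(corpus, threshold=2)
print(make_incomplete(["the", "rare", "aardvark", "slept"], labels, p=1.0))
# -> ['the', '[MASK]', '[MASK]', '[MASK]']
```

In a full training loop, the discriminator logits would feed a binary cross-entropy loss, with GradReverse.apply(embeddings, lambd) placed between the PLM encoder and the similarity discriminator so that the encoder is pushed toward frequency-invariant representations.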

Authors

  • Bing Wang
    Computer Science & Engineering Department at the University of Connecticut.
  • Ximing Li
College of Computer Science and Technology, Jilin University, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, China.
  • Zhiyao Yang
    College of Computer Science and Technology, Jilin University, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, China.
  • Yuanyuan Guan
    School of Humanities, Jilin University, China.
  • Jiayin Li
    Faculty of Creative Arts, University of Malaya, Kuala Lumpur, 50603, Malaysia.
  • Shengsheng Wang
    College of Computer Science and Technology, Jilin University, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, China. Electronic address: wss@jlu.edu.cn.