Personalized response generation by Dual-learning based domain adaptation.

Journal: Neural Networks: The Official Journal of the International Neural Network Society

Abstract

Open-domain conversation is one of the most challenging artificial intelligence problems, involving language understanding, reasoning, and the use of commonsense knowledge. The goal of this paper is to further improve response generation by incorporating personalization criteria. We propose a novel method called PRGDDA (Personalized Response Generation by Dual-learning based Domain Adaptation), a personalized response generation model grounded in theories of domain adaptation and dual learning. During training, PRGDDA first learns a general human responding style from large-scale general data (without user-specific information), and then fine-tunes the model on a small amount of personalized data with a dual-learning mechanism to generate personalized conversations. We conduct experiments on two real-world datasets, in English and Chinese, to verify the effectiveness of the proposed model. Experimental results show that our model generates better personalized responses for different users.
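The two-stage procedure described above can be sketched structurally as follows. This is a minimal illustrative sketch, not the authors' implementation: the class and function names (`Seq2SeqStub`, `pretrain`, `dual_finetune`) are hypothetical stand-ins, and the stub model records update counts rather than training a real encoder-decoder.

```python
# Structural sketch of the two-stage PRGDDA training procedure from the
# abstract. All names here are hypothetical illustrations; a real system
# would use neural sequence-to-sequence models and reward-driven updates.

class Seq2SeqStub:
    """Stands in for a neural response generator (e.g., an encoder-decoder)."""
    def __init__(self, name):
        self.name = name
        self.updates = 0

    def train_step(self, source, target):
        # A real model would compute a loss on (source, target) and backprop.
        self.updates += 1

    def generate(self, source):
        return f"<{self.name} reply to: {source}>"

def pretrain(model, general_pairs):
    """Stage 1: learn a general responding style from large,
    non-personalized (post, response) pairs."""
    for post, response in general_pairs:
        model.train_step(post, response)

def dual_finetune(forward_model, backward_model, personal_pairs):
    """Stage 2: fine-tune on a small personalized set with a dual-learning
    loop. The forward model maps post -> response (primal task) and the
    backward model maps response -> post (dual task); in the real method,
    each direction's reconstruction quality provides a training signal
    for the other."""
    for post, response in personal_pairs:
        generated = forward_model.generate(post)            # primal task
        reconstructed = backward_model.generate(generated)  # dual task
        # Here a reward measuring how well `reconstructed` recovers `post`
        # would weight both updates; the stub just counts them.
        forward_model.train_step(post, response)
        backward_model.train_step(response, post)

# Toy data standing in for the general and personalized corpora.
general = [("hi", "hello"), ("how are you", "fine, thanks")]
personal = [("how are you", "fine thx lol")]

fwd, bwd = Seq2SeqStub("forward"), Seq2SeqStub("backward")
pretrain(fwd, general)
dual_finetune(fwd, bwd, personal)
print(fwd.updates, bwd.updates)  # forward saw both stages; backward only stage 2
```

The split mirrors the abstract's premise: the large general corpus fixes the overall responding style, while the small personalized corpus only needs to adapt that style, which is where the dual reconstruction signal helps compensate for scarce user-specific data.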

Authors

  • Min Yang
    College of Food Science and Engineering, Ocean University of China, Qingdao, 266003, Shandong, China.
  • Wenting Tu
    School of Information Management and Engineering, Shanghai University of Finance and Economics, Shanghai, China. Electronic address: tu.wenting@mail.shufe.edu.cn.
  • Qiang Qu
    Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China. Electronic address: qiang@siat.ac.cn.
  • Zhou Zhao
    School of Computing Science, Zhejiang University, Hangzhou, China. Electronic address: zhouzhao@zju.edu.cn.
  • Xiaojun Chen
    Department of Gynecology, Obstetrics and Gynecology Hospital of Fudan University, Shanghai, China.
  • Jia Zhu
    School of Computer Science, South China Normal University, Guangzhou, China. Electronic address: jzhu@m.scnu.edu.cn.