Multitype view of knowledge contrastive learning for recommendation.

Journal: Neural Networks: the official journal of the International Neural Network Society
PMID:

Abstract

Graph Neural Networks (GNNs) play an increasingly vital role in recommender systems. To improve knowledge perception within GNNs, contrastive learning has been applied and has proven highly effective. GNNs can aggregate diverse knowledge and integrate topology, while contrastive learning derives supervisory signals from the data itself, so combining the two can improve recommendation. However, poorly designed or incomplete contrastive learning setups limit the ability of GNN-based recommender systems to learn from knowledge graphs and interaction graphs. To better exploit the valuable information within knowledge graphs, we propose a novel multitype view of knowledge contrastive learning (MVKC) model for recommendation. The MVKC model generates hierarchical views and augmented views in two modules, performing cross-hierarchical-view and cross-augmented-view contrastive learning to mine graph features in a self-supervised manner. The hierarchical views consist of global and local parts at multiple levels, while the augmented views are fused from an augmented knowledge graph and an augmented interaction graph produced by our augmentation procedure. These designs allow the MVKC model to alleviate the sparsity of user-item interaction graphs, suppress knowledge-graph noise, and filter long-tail entities, all of which have proven extremely important for recommendation. The MVKC model also shows strong robustness to interference, which is crucial for a reliable model. Experiments on three public datasets demonstrate that the MVKC model outperforms current state-of-the-art methods.
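As a rough illustration of the cross-view contrastive objective the abstract describes, the sketch below computes an InfoNCE loss that treats the same node under two views (e.g., a hierarchical view and an augmented view) as a positive pair and all other nodes as negatives. This is a minimal sketch under assumed names and settings (the function name, the temperature value, and the random stand-in embeddings are illustrative), not the authors' implementation.

    # Minimal cross-view InfoNCE sketch (PyTorch); illustrative only.
    import torch
    import torch.nn.functional as F

    def cross_view_infonce(z1: torch.Tensor, z2: torch.Tensor,
                           tau: float = 0.2) -> torch.Tensor:
        """InfoNCE loss: the same node in the other view is the positive;
        all other nodes in that view serve as negatives."""
        z1 = F.normalize(z1, dim=-1)   # (N, d) embeddings from view 1
        z2 = F.normalize(z2, dim=-1)   # (N, d) embeddings from view 2
        logits = z1 @ z2.t() / tau     # (N, N) pairwise cosine similarities
        labels = torch.arange(z1.size(0), device=z1.device)  # positives on diagonal
        return F.cross_entropy(logits, labels)

    # Usage with stand-in embeddings for two views of 128 nodes:
    hier_view = torch.randn(128, 64)   # stand-in for hierarchical-view output
    aug_view = torch.randn(128, 64)    # stand-in for augmented-view output
    loss = cross_view_infonce(hier_view, aug_view)
    print(loss.item())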

Authors

  • Xiao-Jun Yang
    Guangdong University of Technology, Guangzhou 510006, China; Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education of China, Guangdong University of Technology, Guangzhou 510006, China; Peng Cheng Laboratory, Shenzhen 518055, China. Electronic address: yangxj18@gdut.edu.cn.
  • Yang-Hui Wu
    Guangdong University of Technology, Guangzhou 510006, China. Electronic address: 535476987@qq.com.
  • Zhi-Hao Zhang
The School of Integrated Circuits, Guangdong University of Technology, Guangzhou 510006, China. Electronic address: zhihaozhang@gdut.edu.cn.
  • Jing Wang
    Endoscopy Center, Peking University Cancer Hospital and Institute, Beijing, China.
  • Fei-Ping Nie
    The School of Computer Science and the Center for Optical Imagery Analysis and Learning, Northwestern Polytechnical University, Xi'an 710072, China. Electronic address: feipingnie@gmail.com.