A rule- and query-guided reinforcement learning for extrapolation reasoning in temporal knowledge graphs.

Journal: Neural networks : the official journal of the International Neural Network Society
PMID:

Abstract

Extrapolation reasoning in temporal knowledge graphs (TKGs) aims to predict future facts from historical data and finds extensive application in diverse real-world scenarios. Existing TKG reasoning methods primarily focus on capturing how facts evolve in order to improve entity temporal representations, often overlooking alignment with query semantics. More importantly, these methods fail to generate explicit inference paths, resulting in a lack of explainability. To address these challenges, we introduce LogiRL, a rule- and query-guided reinforcement learning framework for extrapolation reasoning over TKGs. Specifically, LogiRL designs a temporal logic rule-guided reward mechanism that steers RL agents toward actions consistent with established rules, thereby fostering the generation of explainable and logical reasoning paths. Furthermore, LogiRL integrates neighborhood information with query semantics, enriching the temporal representation of actions and significantly enhancing the precision of extrapolation reasoning. Comprehensive experiments conducted on four real-world datasets demonstrate the superiority of LogiRL over existing state-of-the-art models in extrapolation reasoning.
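To make the abstract's rule-guided reward idea concrete, the following is a minimal, hypothetical sketch of how a temporal logic rule could shape an RL agent's reward during path-based TKG reasoning. The function name, the path representation as (relation, timestamp, entity) steps, and the specific reward values are illustrative assumptions, not the actual LogiRL design.

```python
# Hypothetical sketch of a rule-guided reward for path-based TKG reasoning.
# A path is a list of (relation, timestamp, entity) hops taken by the agent;
# a rule is represented simply as a tuple of relation names (the rule body).

def rule_guided_reward(path, target_entity, rules,
                       hit_reward=1.0, rule_bonus=0.5):
    """Combine a terminal hit reward with a bonus when the path's
    relation sequence matches a known temporal logic rule."""
    # Terminal reward: did the agent end at the query's answer entity?
    reward = hit_reward if path and path[-1][2] == target_entity else 0.0

    # Rule bonus: reward paths whose relation sequence instantiates a rule,
    # steering the agent toward rule-consistent (explainable) trajectories.
    relation_sequence = tuple(step[0] for step in path)
    if relation_sequence in rules:
        reward += rule_bonus

    return reward


# Illustrative usage with a two-hop path and one mined rule body.
path = [("visits", 1, "CountryB"), ("negotiates_with", 2, "CountryC")]
rules = {("visits", "negotiates_with")}
print(rule_guided_reward(path, "CountryC", rules))  # hit + rule bonus
print(rule_guided_reward(path, "CountryX", rules))  # rule bonus only
```

In this sketch the rule bonus is additive, so a path can still earn partial credit for being rule-consistent even when it misses the answer; how LogiRL actually weights rule consistency against answer correctness is not specified in the abstract.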

Authors

  • Tingxuan Chen
    School of Computer Science and Engineering, Central South University, Changsha, Hunan 410083, China. Electronic address: chentingxuan@csu.edu.cn.
  • Liu Yang
    Department of Ultrasound, Hunan Children's Hospital, Changsha, China.
  • Zidong Wang
    Department of Information Systems and Computing, Brunel University, Uxbridge, Middlesex, UB8 3PH, UK; Faculty of Engineering, King Abdulaziz University, Jeddah 21589, Saudi Arabia. Electronic address: zidong.wang@brunel.ac.uk.
  • Jun Long