A rule- and query-guided reinforcement learning for extrapolation reasoning in temporal knowledge graphs.
Journal:
Neural Networks: the official journal of the International Neural Network Society
PMID:
40055886
Abstract
Extrapolation reasoning in temporal knowledge graphs (TKGs) aims to predict future facts from historical data and has extensive applications in real-world scenarios. Existing TKG reasoning methods focus primarily on capturing fact evolution to improve entities' temporal representations, often overlooking alignment with query semantics. More importantly, these methods fail to generate explicit inference paths and therefore lack explainability. To address these challenges, we introduce LogiRL, a rule- and query-guided reinforcement learning framework for extrapolation reasoning over TKGs. Specifically, LogiRL designs a temporal logic rule-guided reward mechanism that steers the RL agent toward actions consistent with established rules, fostering the generation of explainable and logical reasoning paths. Furthermore, LogiRL integrates neighborhood information with query semantics, enriching the temporal representation of actions and improving the precision of extrapolation reasoning. Comprehensive experiments on four real-world datasets demonstrate that LogiRL outperforms existing state-of-the-art models in extrapolation reasoning.
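In spirit, a rule-guided reward of the kind the abstract describes can be sketched as reward shaping: the agent receives its usual terminal reward for reaching the correct answer entity, plus a bonus when the chosen action (relation) is licensed by a mined temporal logic rule for the query relation. The rule base, bonus values, and all names below are illustrative assumptions for this sketch, not LogiRL's actual design.

```python
# Hypothetical sketch of a rule-guided reward; not LogiRL's actual implementation.
from typing import Dict, Set

# Toy rule base (assumed): query relation -> relations that mined temporal
# logic rules license as a valid next hop for that query.
RULES: Dict[str, Set[str]] = {
    "visits": {"meets_with", "travels_to"},
}

def shaped_reward(query_rel: str, action_rel: str,
                  reached_answer: bool,
                  hit_reward: float = 1.0,
                  rule_bonus: float = 0.2) -> float:
    """Terminal hit reward plus a bonus when the action follows a known rule."""
    reward = hit_reward if reached_answer else 0.0
    if action_rel in RULES.get(query_rel, set()):
        reward += rule_bonus  # encourage rule-consistent, explainable paths
    return reward
```

Under this shaping, a rule-consistent action that reaches the answer earns 1.2, while a rule-violating miss earns 0.0, so the policy gradient favors paths that both answer the query and follow the mined rules.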