Learning to activate logic rules for textual reasoning.

Journal: Neural Networks: The Official Journal of the International Neural Network Society
PMID:

Abstract

Most current textual reasoning models cannot learn a human-like reasoning process and therefore lack interpretability and logical accuracy. To help address this issue, we propose a novel reasoning model that learns to activate logic rules explicitly via deep reinforcement learning. It takes the form of a Memory Network but features a special memory that stores relational tuples, mimicking the "Image Schema" in human cognitive activities. We recast textual reasoning as a sequential decision-making process that modifies or retrieves from this memory, with logic rules serving as state-transition functions. Activating logic rules for reasoning involves two problems, variable binding and relation activation, and our model is a first step toward solving them jointly. It achieves an average error rate of 0.7% on bAbI-20, a widely used synthetic reasoning benchmark, using fewer than 1k training samples and no supporting facts.
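The core idea in the abstract — a memory of relational tuples, with logic rules acting as state-transition functions that modify or retrieve from it — can be sketched minimally as below. This is an illustrative toy on a bAbI-style story, not the paper's actual model: the class and rule names (`TupleMemory`, `rule_move`, `rule_grab`, `rule_where_is`) and the hand-written rules are assumptions for exposition; in the paper, rule activation and variable binding are learned via deep reinforcement learning rather than hard-coded.

```python
class TupleMemory:
    """Toy relational memory: stores (subject, relation, object) tuples,
    loosely mimicking the "Image Schema" memory described in the abstract."""

    def __init__(self):
        self.tuples = set()

    def modify(self, tup):
        # State-modifying access: add a relational tuple.
        self.tuples.add(tup)

    def retrieve(self, subject, relation):
        # Retrieval access: all objects linked to subject by relation.
        return {o for (s, r, o) in self.tuples if s == subject and r == relation}


# Hand-written "logic rules" as state-transition functions over the memory.
def rule_move(memory, actor, place):
    # move(actor, place): actor's location is overwritten with the new place.
    for old in list(memory.retrieve(actor, "is_in")):
        memory.tuples.discard((actor, "is_in", old))
    memory.modify((actor, "is_in", place))


def rule_grab(memory, actor, item):
    # grab(actor, item): the item now travels with the actor.
    memory.modify((actor, "holds", item))


def rule_where_is(memory, item):
    # Retrieval rule: an item held by someone is wherever its holder is.
    for (s, r, o) in memory.tuples:
        if r == "holds" and o == item:
            return next(iter(memory.retrieve(s, "is_in")), None)
    return next(iter(memory.retrieve(item, "is_in")), None)


# bAbI-style story: each sentence triggers one rule application.
mem = TupleMemory()
rule_move(mem, "mary", "kitchen")   # "Mary went to the kitchen."
rule_grab(mem, "mary", "apple")     # "Mary picked up the apple."
rule_move(mem, "mary", "garden")    # "Mary went to the garden."
print(rule_where_is(mem, "apple"))  # -> garden
```

Choosing which rule to apply to which entities at each sentence is exactly the joint relation-activation and variable-binding problem the abstract describes; the paper learns that sequential decision policy instead of scripting it.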

Authors

  • Yiqun Yao
    Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing, China; Research Center for Brain-inspired Intelligence, CASIA, China; University of Chinese Academy of Sciences, China.
  • Jiaming Xu
Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing, China.
  • Jing Shi
Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing, China.
  • Bo Xu
Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing, China.