Neuro-symbolic procedural semantics for explainable visual dialogue.

Journal: PLOS ONE

Abstract

This paper introduces a novel approach to visual dialogue based on neuro-symbolic procedural semantics. The approach builds on earlier work on procedural semantics for visual question answering and extends it with neuro-symbolic mechanisms that handle the challenges inherent to dialogue, in particular the incremental nature of the information that is conveyed. Concretely, we introduce (i) a conversation memory, a data structure that explicitly and incrementally represents the information expressed during the subsequent turns of a dialogue, and (ii) a neuro-symbolic procedural semantic representation that is grounded in both the visual input and the conversation memory. We validate the methodology on the MNIST Dialog and CLEVR-Dialog benchmark challenges, achieving question-level accuracies of 99.8% and 99.2%, respectively. The methodology presented in this paper contributes to the growing body of research in artificial intelligence that tackles tasks involving both low-level perception and high-level reasoning through a combination of neural and symbolic techniques. It thereby leads the way towards conversational agents that can hold more explainable, natural and coherent conversations with their human interlocutors.
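To make the notion of a conversation memory more concrete, the minimal sketch below illustrates, under our own assumptions, how such a data structure might incrementally accumulate entity information across dialogue turns and be queried by procedural-semantic primitives. All names in it (ConversationMemory, add_turn, filter_by, and the toy dialogue) are hypothetical and are not taken from the authors' implementation; they only exemplify the general idea described in the abstract.

```python
# Illustrative sketch (not the authors' implementation): a minimal
# "conversation memory" that incrementally records the entities and
# attributes established across dialogue turns, together with simple
# procedural-semantic primitives that query it.

from dataclasses import dataclass, field


@dataclass
class Entity:
    """An object mentioned in the dialogue, with the attributes known so far."""
    entity_id: int
    attributes: dict = field(default_factory=dict)


class ConversationMemory:
    """Explicit, incremental record of what the dialogue has established."""

    def __init__(self):
        self.entities = {}          # entity_id -> Entity
        self.last_mentioned = None  # supports coreference ("it", "that one")

    def add_turn(self, entity_id, **attributes):
        """Integrate the information conveyed by one question-answer turn."""
        entity = self.entities.setdefault(entity_id, Entity(entity_id))
        entity.attributes.update(attributes)
        self.last_mentioned = entity
        return entity

    def filter_by(self, **attributes):
        """Primitive: return all remembered entities matching the given attributes."""
        return [e for e in self.entities.values()
                if all(e.attributes.get(k) == v for k, v in attributes.items())]


# Usage: two turns of a toy dialogue, then follow-up questions that are
# resolved against the memory rather than against the raw image.
memory = ConversationMemory()
memory.add_turn(0, color="red", shape="cube")      # "There is a red cube."
memory.add_turn(1, color="blue", shape="sphere")   # "Next to it is a blue sphere."

# "How many cubes have we talked about?" -> count(filter_by(shape="cube"))
print(len(memory.filter_by(shape="cube")))         # 1
# "What colour is it?" -> query the most recently mentioned entity
print(memory.last_mentioned.attributes["color"])   # blue
```

In the paper's setting, such primitives would additionally be grounded in the visual input (e.g. neural attention over image regions), whereas this sketch only shows the symbolic, memory-side bookkeeping.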

Authors

  • Lara Verheyen
    Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium.
  • Jérôme Botoko Ekila
    Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium.
  • Jens Nevens
    Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium.
  • Paul Van Eecke
    Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium.
  • Katrien Beuls
    Faculté d'informatique, Université de Namur, Namur, Belgium.