Higher-Order Explanations of Graph Neural Networks via Relevant Walks.

Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence

Abstract

Graph Neural Networks (GNNs) are a popular approach for predicting graph-structured data. Because GNNs tightly entangle the input graph into the neural network structure, common explainable AI approaches are not applicable. To a large extent, GNNs have so far remained black boxes for the user. In this paper, we show that GNNs can in fact be naturally explained using higher-order expansions, i.e., by identifying groups of edges that jointly contribute to the prediction. Practically, we find that such explanations can be extracted using a nested attribution scheme, in which existing techniques such as layer-wise relevance propagation (LRP) can be applied at each step. The output is a collection of walks into the input graph that are relevant for the prediction. Our novel explanation method, which we denote by GNN-LRP, is applicable to a broad range of graph neural networks and lets us extract practically relevant insights on sentiment analysis of text data, structure-property relationships in quantum chemistry, and image classification.
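
To make the nested attribution scheme concrete, the sketch below scores every length-two walk through a toy two-layer graph convolutional network: an LRP-ε redistribution is applied at each layer, and the relevance flow is restricted to one node per step, so that each walk (j0 → j1 → j2) receives its own score. The network, shapes, and the specific LRP-ε rule are illustrative assumptions made for this sketch, not the paper's reference implementation; GNN-LRP itself covers a broader class of GNNs and propagation rules.

    # Toy sketch of the nested attribution idea (illustrative assumptions
    # throughout): relevance is propagated backward through a small
    # two-layer GCN with an LRP-epsilon rule, and at each layer the flow
    # is restricted to one node, so every walk gets its own score.
    import numpy as np

    rng = np.random.default_rng(0)
    N, d, h = 4, 3, 5                       # nodes, input dim, hidden dim
    A = np.array([[1, 1, 0, 0],             # adjacency with self-loops
                  [1, 1, 1, 0],
                  [0, 1, 1, 1],
                  [0, 0, 1, 1]], float)
    X = rng.normal(size=(N, d))             # node features
    W0 = rng.normal(size=(d, h))            # layer-1 weights
    W1 = rng.normal(size=(h, 1))            # layer-2 weights, scalar logit
    EPS = 1e-9

    # Forward pass: two message-passing layers, ReLU in between,
    # sum readout over all nodes.
    H1 = np.maximum(A @ X @ W0, 0.0)
    H2 = A @ H1 @ W1
    f = H2.sum()

    def stabilize(s):
        """Sign-aware epsilon stabilizer used by the LRP-epsilon rule."""
        return s + EPS * (1.0 if s >= 0 else -1.0)

    def walk_relevance(j0, j1, j2):
        """LRP-epsilon relevance of the single walk j0 -> j1 -> j2."""
        # The readout is a sum, so node j2's logit carries its own relevance.
        R2 = H2[j2, 0]
        # Layer 2: contributions of each (source node, hidden unit) pair
        # to the logit of node j2; keep only the row routed through j1.
        Z2 = A[j2][:, None] * H1 * W1[:, 0]            # shape (N, h)
        R1 = Z2[j1] / stabilize(Z2.sum()) * R2         # shape (h,)
        # Layer 1: contributions of each (source node, input feature) pair
        # to the hidden pre-activations of node j1; keep only node j0.
        R0 = 0.0
        for k in range(h):
            Z1 = A[j1][:, None] * X * W0[:, k]         # shape (N, d)
            R0 += Z1[j0].sum() / stabilize(Z1.sum()) * R1[k]
        return R0

    # Score all walks permitted by the graph.
    scores = {(j0, j1, j2): walk_relevance(j0, j1, j2)
              for j0 in range(N) for j1 in range(N) for j2 in range(N)
              if A[j1, j0] and A[j2, j1]}
    print(f"f = {f:.4f}, sum of walk scores = {sum(scores.values()):.4f}")
    best = max(scores, key=lambda w: abs(scores[w]))
    print("most relevant walk:", best, "score:", round(scores[best], 4))

Because LRP is (approximately) conservative, the walk scores add up to the network output, which is what allows the collection of walks to be read as a decomposition of the prediction.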

Authors

  • Thomas Schnake
  • Oliver Eberle
  • Jonas Lederer
  • Shinichi Nakajima
    Machine Learning Group, Technische Universität Berlin, 10587 Berlin, Germany; BIFOLD - Berlin Institute for the Foundations of Learning and Data, Germany; RIKEN AIP, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan. Electronic address: nakajima@tu-berlin.de.
  • Kristof T. Schütt
  • Klaus-Robert Müller
    Berlin Institute for the Foundations of Learning and Data (BIFOLD), Berlin, Germany.
  • Grégoire Montavon
    Machine Learning Group, Technische Universität Berlin, Berlin, Germany.