From explanation to intervention: Interactive knowledge extraction from Convolutional Neural Networks used in radiology.

Journal: PLOS ONE
Published Date:

Abstract

Deep learning models such as Convolutional Neural Networks (CNNs) are highly effective at extracting complex image features from medical X-rays. However, the limited interpretability of CNNs has hampered their deployment in medical settings, as they have failed to gain the trust of clinicians. In this work, we propose an interactive framework that allows clinicians to ask what-if questions and intervene in the decisions of a CNN, with the aim of increasing trust in the system. The framework translates a layer of a trained CNN into a measurable and compact set of symbolic rules. Expert interaction with visualizations of the rules promotes the use of clinically relevant CNN kernels and attaches meaning to the rules. The definition and relevance of the kernels are supported by radiomics analyses and permutation evaluations, respectively. CNN kernels that lack a clinically meaningful interpretation are removed without affecting model performance. By allowing clinicians to evaluate the impact of adding or removing kernels from the rule set, our approach produces an interpretable refinement of the data-driven CNN, in alignment with medical best practice.
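The kernel-relevance evaluation mentioned above can be illustrated with a short sketch. The following Python/PyTorch snippet is a minimal, hypothetical example and not the authors' implementation: it ablates one convolutional kernel at a time by zeroing its output channel through a forward hook and records the resulting drop in accuracy, in the spirit of the permutation evaluations described in the abstract. TinyCNN, kernel_relevance, and the random stand-in data are all illustrative assumptions; a real study would use a trained radiology model and chest X-ray data.

    import torch
    import torch.nn as nn

    # Illustrative stand-in model (assumption, not the paper's architecture).
    class TinyCNN(nn.Module):
        def __init__(self, n_classes: int = 2):
            super().__init__()
            self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
            self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(8, n_classes))

        def forward(self, x):
            return self.head(torch.relu(self.conv(x)))

    def accuracy(model, x, y):
        with torch.no_grad():
            return (model(x).argmax(dim=1) == y).float().mean().item()

    def kernel_relevance(model, layer, x, y):
        """Accuracy drop when each kernel (output channel) of `layer` is zeroed."""
        base = accuracy(model, x, y)
        scores = []
        for k in range(layer.out_channels):
            # A forward hook that returns a tensor replaces the layer's output;
            # here it zeroes channel k, ablating that kernel's contribution.
            handle = layer.register_forward_hook(
                lambda m, inp, out, k=k: out.index_fill(1, torch.tensor([k]), 0.0))
            scores.append(base - accuracy(model, x, y))
            handle.remove()
        return scores

    # Usage on random stand-in data: a near-zero score suggests the kernel can be
    # removed without affecting model performance.
    model = TinyCNN().eval()
    x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 2, (32,))
    print(kernel_relevance(model, model.conv, x, y))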

Authors

  • Kwun Ho Ngan
    giCentre, Department of Computer Science, School of Mathematics, Computer Science and Engineering, City, University of London, London EC1V 0HB, UK.
  • Esma Mansouri-Benssassi
AffectiveHalo Ltd, Tom Morris Drive, St Andrews KY16 8HS, UK.
  • James Phelan
    Data Science Institute, City, University of London, London, United Kingdom.
  • Joseph Townsend
    Fujitsu Research of Europe Ltd, Slough, United Kingdom.
  • Artur d'Avila Garcez
    Data Science Institute, City, University of London, London, United Kingdom.