Explainable AI in medical imaging: An overview for clinical practitioners - Beyond saliency-based XAI approaches.

Journal: European Journal of Radiology

Abstract

Driven by recent advances in Artificial Intelligence (AI) and Computer Vision (CV), the implementation of AI systems in the medical domain has increased correspondingly. This is especially true for medical imaging, where AI aids several imaging-based tasks such as classification, segmentation, and registration. Moreover, AI is reshaping medical research and contributing to the development of personalized clinical care. Alongside this expanding implementation arises the need for a thorough understanding of AI systems, their inner workings, potentials, and limitations, which the field of eXplainable AI (XAI) aims to provide. Because medical imaging is mainly associated with visual tasks, most explainability approaches rely on saliency-based XAI methods. In contrast, this article investigates the full potential of XAI in medical imaging by focusing specifically on XAI techniques that do not rely on saliency and by providing diversified examples. We address a broad audience, but particularly healthcare professionals. This work also aims to establish common ground for cross-disciplinary understanding and exchange between Deep Learning (DL) developers and healthcare professionals, which is why we opted for a non-technical overview. The presented XAI methods are grouped by their output representation into the following categories: case-based explanations, textual explanations, and auxiliary explanations.

Authors

  • Katarzyna Borys
    Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany; Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147 Essen, Germany. Electronic address: Katarzyna.Borys@uk-essen.de.
  • Yasmin Alyssa Schmitt
    Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany.
  • Meike Nauta
    Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany; Data Management & Biometrics Group, University of Twente, Drienerlolaan 5, 7522 NB Enschede, The Netherlands.
  • Christin Seifert
    Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany.
  • Nicole Krämer
    Department of Social Psychology, Media and Communication, University of Duisburg-Essen, Forsthausweg 2, 47057 Duisburg, Germany; Research Center "Trustworthy Data Science and Security", Otto-Hahn-Straße 14, 44227 Dortmund, Germany.
  • Christoph M Friedrich
    Department of Computer Science, University of Applied Sciences and Arts Dortmund, Dortmund, Germany.
  • Felix Nensa
    Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, 45131 Essen, Germany.