A manifesto on explainability for artificial intelligence in medicine.

Journal: Artificial Intelligence in Medicine

Abstract

The rapid increase in interest in, and use of, artificial intelligence (AI) in computer applications has raised a parallel concern about its ability (or lack thereof) to provide understandable, or explainable, output to users. This concern is especially legitimate in biomedical contexts, where patient safety is of paramount importance. This position paper brings together seven researchers working in the field, with different roles and perspectives, to explore in depth the concept of explainable AI, or XAI, offering a functional definition and a conceptual framework or model that can be used when considering XAI. This is followed by a series of desiderata for attaining explainability in AI, each of which touches upon a key domain in biomedicine.

Authors

  • Carlo Combi
Department of Computer Science, University of Verona, Ca' Vignal 2, Strada le Grazie 15, 37134 Verona, Italy. Electronic address: carlo.combi@univr.it.
  • Beatrice Amico
    University of Verona, Verona, Italy.
  • Riccardo Bellazzi
    Department of Electrical, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy.
  • Andreas Holzinger
    Human-Centered AI Lab, Medical University of Graz, Graz, Austria.
  • Jason H Moore
    University of Pennsylvania, Philadelphia, PA, USA.
  • Marinka Zitnik
    Department of Computer Science, Stanford University.
  • John H Holmes
Department of Biostatistics, Epidemiology, and Informatics, Institute for Biomedical Informatics, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA 19104, USA.