Causability and explainability of artificial intelligence in medicine.

Journal: Wiley interdisciplinary reviews. Data mining and knowledge discovery

Abstract

Explainable artificial intelligence (AI) is attracting much interest in medicine. Technically, the problem of explainability is as old as AI itself, and classic AI represented comprehensible, retraceable approaches. Their weakness, however, was in dealing with the uncertainties of the real world. Through the introduction of probabilistic learning, applications became increasingly successful, but also increasingly opaque. Explainable AI deals with the implementation of transparency and traceability of statistical black-box machine learning methods, particularly deep learning (DL). We argue that there is a need to go beyond explainable AI: to reach a level of explainable medicine we need causability. In the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations. In this article, we provide some necessary definitions to discriminate between explainability and causability, as well as a use-case of DL interpretation and of human explanation in histopathology. The main contribution of this article is the notion of causability, which is differentiated from explainability in that causability is a property of a person, while explainability is a property of a system. This article is categorized under: Fundamental Concepts of Data and Knowledge > Human Centricity and User Interaction.

Authors

  • Andreas Holzinger
    Human-Centered AI Lab, Medical University of Graz, Graz, Austria.
  • Georg Langs
Department of Biomedical Imaging and Image-guided Therapy, Computational Imaging Research Lab, Medical University of Vienna, Vienna, Austria.
  • Helmut Denk
Institute of Pathology, Medical University of Graz, Graz, Austria.
  • Kurt Zatloukal
    Medical University of Graz, Graz, Austria.
  • Heimo Müller
    Medical University of Graz, Graz, Austria.
