The enlightening role of explainable artificial intelligence in medical & healthcare domains: A systematic literature review.

Journal: Computers in Biology and Medicine

Abstract

In domains such as medicine and healthcare, the interpretability and explainability of machine learning and artificial intelligence systems are crucial for building trust in their results. Errors made by these systems, such as incorrect diagnoses or treatment recommendations, can have severe and even life-threatening consequences for patients. To address this issue, Explainable Artificial Intelligence (XAI) has emerged as a popular area of research focused on opening up the black-box nature of complex, hard-to-interpret machine learning models. Although practitioners can improve the accuracy of these models through technical expertise, understanding how the models actually arrive at their predictions can be difficult or even impossible. XAI algorithms such as Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) can explain these models by quantifying feature importance, thereby improving trust and confidence in their predictions. Many articles have been published that propose solutions to medical problems using machine learning models alongside XAI algorithms to provide interpretability and explainability. In our study, we identified 454 articles published from 2018 to 2022 and analyzed 93 of them to explore the use of these techniques in the medical domain.
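To illustrate the kind of model-agnostic feature-importance explanation the abstract refers to, the sketch below implements permutation importance, a simpler relative of LIME and SHAP: it treats the model as a black box and scores each feature by how much the predictions change when that feature's values are shuffled. This is not the LIME or SHAP algorithm itself (those fit local surrogate models and compute Shapley values, respectively), and the `black_box` model, its weights, and the synthetic records are invented purely for illustration.

```python
import random

# Toy "black-box" model standing in for a trained clinical risk predictor.
# The weights are made up for illustration: feature 0 dominates the
# prediction, feature 1 contributes weakly, and feature 2 is ignored.
def black_box(sample):
    return 0.8 * sample[0] + 0.2 * sample[1] + 0.0 * sample[2]

def permutation_importance(model, data, n_repeats=20, seed=0):
    """Score each feature by the average absolute change in the model's
    predictions when that feature's values are shuffled across the dataset."""
    rng = random.Random(seed)
    baseline = [model(row) for row in data]
    importances = []
    for j in range(len(data[0])):
        total = 0.0
        for _ in range(n_repeats):
            col = [row[j] for row in data]
            rng.shuffle(col)
            # Replace feature j with its shuffled values, keep the rest intact.
            shuffled = [row[:j] + [v] + row[j + 1:] for row, v in zip(data, col)]
            preds = [model(row) for row in shuffled]
            total += sum(abs(p - b) for p, b in zip(preds, baseline)) / len(data)
        importances.append(total / n_repeats)
    return importances

# Synthetic "patient records": 50 rows of 3 features in [0, 1).
rng = random.Random(42)
records = [[rng.random() for _ in range(3)] for _ in range(50)]
importances = permutation_importance(black_box, records)
```

Running this ranks feature 0 highest and assigns feature 2 zero importance, matching the hidden weights of the toy model; LIME and SHAP pursue the same goal of attributing a black-box prediction to input features, but with per-instance (local) explanations.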

Authors

  • Subhan Ali
    Department of Computer Science, Norwegian University of Science & Technology (NTNU), Gjøvik, 2815, Norway. Electronic address: subhan.ali@ntnu.no.
  • Filza Akhlaq
    Department of Computer Science, Sukkur IBA University, Sukkur, 65200, Sindh, Pakistan. Electronic address: filza.bscsf19@iba-suk.edu.pk.
  • Ali Shariq Imran
Department of Computer Science (IDI), Norwegian University of Science and Technology (NTNU), Gjøvik, Norway.
  • Zenun Kastrati
    Department of Informatics, Linnaeus University, Växjö, Sweden.
  • Sher Muhammad Daudpota
Department of Computer Science, Sukkur IBA University, Sukkur, Pakistan.
  • Muhammad Moosa
    Department of Computer Science, Norwegian University of Science & Technology (NTNU), Gjøvik, 2815, Norway. Electronic address: muhammad.moosa@ntnu.no.