Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011-2022).

Journal: Computer Methods and Programs in Biomedicine
Published Date:

Abstract

BACKGROUND AND OBJECTIVES: Artificial intelligence (AI) has branched out to various applications in healthcare, such as health services management, predictive medicine, clinical decision-making, and patient data and diagnostics. Although AI models have achieved human-like performance, their use is still limited because they are seen as black boxes. This lack of trust remains the main reason for their low uptake in practice, especially in healthcare. Hence, explainable artificial intelligence (XAI) has been introduced as a technique that can provide confidence in a model's prediction by explaining how the prediction is derived, thereby encouraging the use of AI systems in healthcare. The primary goal of this review is to identify areas of healthcare that require more attention from the XAI research community.
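
To make the idea of post-hoc explanation concrete, the snippet below is a minimal sketch of one common XAI technique, SHAP feature attribution, applied to a black-box classifier on a public diagnostic dataset. The shap and scikit-learn libraries, the random-forest model, and the breast-cancer dataset are illustrative choices made here, not methods prescribed by this review.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    import shap

    # A public diagnostic dataset stands in for clinical tabular data.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Fit an opaque ("black-box") classifier.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # SHAP assigns each feature an additive contribution to a prediction,
    # which is how the prediction is "explained" to the end user.
    explainer = shap.TreeExplainer(model)
    explanation = explainer(X_test.iloc[:5])

    # Per-feature attributions for the first test patient; the exact array
    # layout (features x classes) depends on the installed shap version.
    print(explanation.values[0])

Attribution output of this kind is one way to open the black box described above, letting clinicians check whether a prediction rests on clinically plausible features.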

Authors

  • Hui Wen Loh
    School of Science and Technology, Singapore University of Social Sciences, Singapore, Singapore.
  • Chui Ping Ooi
    School of Science and Technology, Singapore University of Social Sciences, Singapore, Singapore.
  • Silvia Seoni
    Department of Electronics and Telecommunications, Biolab, Politecnico di Torino, Torino 10129, Italy.
  • Prabal Datta Barua
    Cogninet Australia, Sydney, NSW 2010, Australia.
  • Filippo Molinari
    Department of Electronics and Telecommunications, Politecnico di Torino, Italy.
  • U Rajendra Acharya
    School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Darling Heights, Australia.