Local interpretable model-agnostic explanation approach for medical imaging analysis: A systematic literature review.
Journal:
Computers in Biology and Medicine
Published Date:
Dec 19, 2024
Abstract
BACKGROUND: The interpretability and explainability of machine learning (ML) and artificial intelligence (AI) systems are critical for generating trust in their outcomes in fields such as medicine and healthcare. Errors produced by these systems, such as inaccurate diagnoses or treatment recommendations, can have serious and even life-threatening consequences for patients. Explainable Artificial Intelligence (XAI) is therefore an increasingly significant area of research, addressing the black-box nature of sophisticated, difficult-to-interpret ML algorithms. XAI techniques such as Local Interpretable Model-Agnostic Explanations (LIME) can provide explanations for these models, raising confidence in the systems and improving trust in their predictions. Numerous works have been published that address medical problems by combining ML models with XAI algorithms to provide interpretability and explainability. The primary objective of this study is to evaluate the performance of the newly emerging LIME techniques within healthcare domains that require more attention in the realm of XAI research.
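To illustrate the kind of LIME workflow surveyed in this review, the following is a minimal, hypothetical sketch using the open-source lime Python package on an image classifier. The classifier_fn and the input image here are placeholders (a dummy two-class scorer and a random array) introduced purely for illustration; they are not taken from any of the reviewed studies.

import numpy as np
from lime import lime_image

def classifier_fn(images):
    # Placeholder for a trained medical-imaging classifier: takes a batch
    # of H x W x 3 images, returns per-class probabilities (two classes here).
    scores = images.mean(axis=(1, 2, 3))
    return np.stack([1.0 - scores, scores], axis=1)

image = np.random.rand(64, 64, 3)  # stand-in for e.g. an X-ray patch

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, classifier_fn, top_labels=2, hide_color=0, num_samples=200)

# Extract the superpixels that most support the top predicted class.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5)

In an actual study, classifier_fn would wrap the trained model's prediction method and image would be a real scan; LIME perturbs superpixels of the input, queries the model on the perturbed copies, and fits a local linear surrogate whose weights indicate which regions drove the prediction.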