Explainable Artificial Intelligence in Radiological Cardiovascular Imaging: A Systematic Review.

Journal: Diagnostics (Basel, Switzerland)

Abstract

Artificial intelligence (AI) and deep learning are increasingly applied in cardiovascular imaging. However, the "black box" nature of these models raises challenges for clinical trust and integration. Explainable Artificial Intelligence (XAI) seeks to address these concerns by providing insights into model decision-making. This systematic review synthesizes current research on the use of XAI methods in radiological cardiovascular imaging. A systematic literature search was conducted in PubMed, Scopus, and Web of Science to identify peer-reviewed original research articles published between January 2015 and March 2025. Studies were included if they applied XAI techniques, such as Gradient-weighted Class Activation Mapping (Grad-CAM), Shapley Additive Explanations (SHAP), Local Interpretable Model-Agnostic Explanations (LIME), or saliency maps, to cardiovascular imaging modalities, including cardiac computed tomography (CT), magnetic resonance imaging (MRI), echocardiography and other ultrasound examinations, and chest X-ray (CXR). Studies focusing on nuclear medicine, structured/tabular data without imaging, or lacking concrete explainability features were excluded. Screening and data extraction followed PRISMA guidelines. A total of 28 studies met the inclusion criteria. Ultrasound examinations (n = 9) and CT (n = 9) were the most common imaging modalities, followed by MRI (n = 6) and chest X-ray (n = 4). Clinical applications included disease classification (e.g., coronary artery disease and valvular heart disease) and the detection of myocardial or congenital abnormalities. Grad-CAM was the most frequently employed XAI method, followed by SHAP. Most studies used saliency-based techniques to generate visual explanations of model predictions. XAI holds considerable promise for improving the transparency and clinical acceptance of deep learning models in cardiovascular imaging. However, the evaluation of XAI methods remains largely qualitative, and standardization is lacking. Future research should focus on robust, quantitative assessment of explainability, prospective clinical validation, and the development of more advanced XAI techniques beyond saliency-based methods. Strengthening the interpretability of AI models will be crucial to ensuring their safe, ethical, and effective integration into cardiovascular care.
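For orientation, the sketch below illustrates the Grad-CAM technique that the review identifies as the most frequently employed XAI method: gradients of the target class score are globally averaged to weight the final convolutional feature maps, yielding a class-specific saliency heatmap over the input image. This is a minimal, generic PyTorch illustration assuming a ResNet-18 backbone (with "layer4" as its last convolutional block); it is not code from any of the reviewed studies.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Minimal Grad-CAM sketch (illustrative; assumes a torchvision ResNet-18).
model = models.resnet18(weights=None).eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    # Cache the feature maps of the last convolutional block.
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    # Cache the gradients flowing back into those feature maps.
    gradients["value"] = grad_out[0].detach()

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(x, class_idx=None):
    """Return a normalized (H, W) heatmap for the predicted or given class."""
    logits = model(x)                        # x: (1, 3, H, W)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    # Global-average-pool the gradients to get per-channel weights (Grad-CAM).
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)  # (1, C, 1, 1)
    cam = F.relu((weights * activations["value"]).sum(dim=1))    # (1, h, w)
    cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                        mode="bilinear", align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Usage: heatmap = grad_cam(torch.randn(1, 3, 224, 224))
```

In clinical studies like those reviewed, such heatmaps are typically overlaid on the CT, MRI, echocardiography, or CXR image so readers can qualitatively check whether the model attends to anatomically plausible regions.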

Authors

  • Matteo Haupt
    Department of Diagnostic and Interventional Radiology, Carl von Ossietzky Universität Oldenburg, 26129 Oldenburg, Germany.
  • Martin H Maurer
    Department of Diagnostic and Interventional Radiology, Carl von Ossietzky Universität Oldenburg, 26129 Oldenburg, Germany.
  • Rohit Philip Thomas
    Department of Diagnostic and Interventional Radiology, Carl von Ossietzky Universität Oldenburg, 26129 Oldenburg, Germany.
