Explaining deep learning for ECG analysis: Building blocks for auditing and knowledge discovery.

Journal: Computers in Biology and Medicine

Abstract

Deep neural networks have become increasingly popular for analyzing ECG data because of their ability to accurately identify cardiac conditions and hidden clinical factors. However, the black-box nature of these models raises a common concern about their lack of transparency. To address this issue, explainable AI (XAI) methods can be employed. In this study, we present a comprehensive analysis of post-hoc XAI methods, investigating both the glocal perspective (local attributions aggregated over multiple samples) and the global perspective (concept-based XAI). We establish a set of sanity checks that identify saliency as the most sensible attribution method. We provide a dataset-wide analysis across entire patient subgroups, which goes beyond anecdotal evidence, to establish the first quantitative evidence for the alignment of model behavior with cardiologists' decision rules. Furthermore, we demonstrate how these XAI techniques can be used for knowledge discovery, such as identifying subtypes of myocardial infarction. We believe the proposed methods can serve as building blocks for a complementary assessment of internal validity during a certification process, as well as for knowledge discovery in ECG analysis.
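To make the two ingredients named in the abstract concrete, the sketch below shows plain gradient saliency (the attribution method the sanity checks single out) applied to a 1D convolutional ECG classifier, followed by a "glocal" aggregation of the per-sample attribution maps over a patient subgroup. This is a minimal illustration, not the paper's implementation: the model `TinyECGNet`, its dimensions, and the random placeholder data are all assumptions made for the example.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for an ECG classifier; any model mapping
# (batch, leads, timesteps) -> class logits would work here.
class TinyECGNet(nn.Module):
    def __init__(self, n_leads=12, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).squeeze(-1))

def saliency(model, x, target_class):
    """Plain gradient saliency: |d logit / d input| per lead and timestep."""
    model.eval()
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits[:, target_class].sum().backward()
    return x.grad.abs()  # shape: (batch, leads, timesteps)

# Glocal view: aggregate local attributions over an entire subgroup,
# e.g. all records carrying a given diagnostic label.
model = TinyECGNet()
subgroup = torch.randn(64, 12, 1000)   # placeholder for real ECG records
attr = saliency(model, subgroup, target_class=2)
glocal_map = attr.mean(dim=0)          # (leads, timesteps) mean attribution
print(glocal_map.shape)
```

Averaging attributions over a subgroup rather than inspecting single records is what turns local explanations into the dataset-wide, beyond-anecdotal evidence the abstract refers to; the resulting per-lead, per-timestep map can then be compared against cardiologists' decision rules.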

Authors

  • Patrick Wagner
  • Temesgen Mehari
    Fraunhofer Heinrich Hertz Institute, Berlin, Germany; Physikalisch-Technische Bundesanstalt, Berlin, Germany. Electronic address: temesgen.mehari@hhi.fraunhofer.de.
  • Wilhelm Haverkamp
    Department of Cardiology, Charité - Universitätsmedizin Berlin, Campus Virchow.
  • Nils Strodthoff
    Fraunhofer Heinrich Hertz Institute, 10587 Berlin, Germany. Author to whom any correspondence should be addressed.