Enhancing explainability in ECG analysis through evidence-based AI interpretability.
Journal:
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference
PMID:
40039475
Abstract
While pre-trained neural networks, e.g., for diagnosis from electrocardiograms (ECGs), are already available and show remarkable performance, their lack of transparency prevents translation to clinical practice. Recently, an explainable artificial intelligence (XAI) software framework was proposed which uses post-hoc interpretability methods to reveal regions of interest (ROIs) within an ECG that were relevant for the network's decision. However, it is not clear how these correlate with the evidence-based ECG features used by cardiologists. Hence, here we propose an extended version of the XAI framework which includes analyses based on ECG wave durations and intervals. Using a publicly available pre-trained neural network, we predicted first degree AV block (1dAVb) and left bundle branch block (LBBB) in the PTB-XL dataset (21,414 ECGs). We used the XAI framework to extract relevances and matched them with the PR interval and QRS duration provided by the PTB-XL+ dataset. For ECGs showing 1dAVb, the ROI was centered on P waves and QRS complexes with prolonged PR intervals; for 96.0% of the network's high-confidence predictions, the PR interval exceeded the evidence-based threshold of 200 ms. For ECGs showing LBBB, the ROI was centered on QRS complexes, with 98.6% of high-confidence predictions showing a wide QRS complex longer than 120 ms. Using our extended XAI framework, we could demonstrate that the vast majority of the neural network's decisions correlate with evidence-based features. Providing this information to cardiologists alongside the classification itself might facilitate clinical translation.
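The consistency check described above can be sketched in a few lines: for ECGs the network flags with high confidence, compute the fraction whose measured interval exceeds the clinical threshold (PR > 200 ms for 1dAVb, QRS > 120 ms for LBBB). This is a hypothetical illustration, assuming per-ECG prediction scores and interval measurements in milliseconds; the function name, confidence cutoff, and toy data are not from the paper.

```python
def fraction_above_threshold(scores, intervals_ms, threshold_ms, confidence_cutoff=0.9):
    """Share of high-confidence positive predictions whose measured
    interval exceeds the evidence-based threshold (illustrative)."""
    # Keep only intervals belonging to high-confidence predictions
    high_conf = [iv for s, iv in zip(scores, intervals_ms) if s >= confidence_cutoff]
    if not high_conf:
        return 0.0
    return sum(iv > threshold_ms for iv in high_conf) / len(high_conf)

# Toy example for 1dAVb: three high-confidence predictions, PR intervals in ms
scores = [0.95, 0.97, 0.92, 0.40]
pr_ms = [220, 240, 190, 160]
print(fraction_above_threshold(scores, pr_ms, threshold_ms=200))  # 2 of 3 exceed 200 ms
```

In the paper's setting, the reported 96.0% (1dAVb, PR) and 98.6% (LBBB, QRS) would be the outputs of such a computation over the PTB-XL+ interval annotations.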