On the evaluation of deep learning interpretability methods for medical images under the scope of faithfulness.
Journal:
Computer Methods and Programs in Biomedicine
Published Date:
May 28, 2024
Abstract
BACKGROUND AND OBJECTIVE: Evaluating the interpretability of deep learning models is crucial for building trust and for gaining insight into their decision-making processes. In this work, we employ class activation map (CAM)-based attribution methods in a setting where only High-Resolution Class Activation Mapping (HiResCAM) is known to produce faithful explanations. The objective is to evaluate the quality of the resulting attribution maps using quantitative metrics and to investigate whether faithfulness aligns with the results of those metrics.
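To make the distinction concrete, the sketch below contrasts the two gradient-based CAM variants mentioned above. Grad-CAM first global-average-pools the gradients into per-channel weights, whereas HiResCAM multiplies gradients and activations element-wise before summing over channels, which is what underlies its faithfulness guarantee for certain architectures. This is a minimal NumPy illustration on hypothetical activation and gradient tensors, not the evaluation pipeline used in the paper.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM: average gradients spatially, then weight the channels.

    activations, gradients: arrays of shape (C, H, W) taken from the
    target convolutional layer (hypothetical example data here).
    """
    weights = gradients.mean(axis=(1, 2))             # per-channel weights via GAP
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels -> (H, W)
    return np.maximum(cam, 0)                         # ReLU keeps positive evidence

def hirescam(activations, gradients):
    """HiResCAM: element-wise gradient-activation product, summed over channels.

    No spatial averaging of gradients, so per-location gradient
    information is preserved in the attribution map.
    """
    cam = (gradients * activations).sum(axis=0)       # (C, H, W) -> (H, W)
    return np.maximum(cam, 0)

# Hypothetical tensors standing in for a real model's forward/backward pass.
rng = np.random.default_rng(0)
acts = rng.standard_normal((4, 8, 8))
grads = rng.standard_normal((4, 8, 8))
print(grad_cam(acts, grads).shape, hirescam(acts, grads).shape)
```

When the gradients are spatially constant per channel, the two maps coincide; they diverge precisely when gradients vary across locations, which is where faithfulness can break for averaged-weight methods.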