Explainable AI (XAI) for Neonatal Pain Assessment via Influence Function Modification.
Journal:
Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
PMID:
40039767
Abstract
As machine learning plays an increasingly crucial role in medical applications, the need to explain these complex, often opaque models becomes more urgent. Influence functions have emerged as an important method for explaining such black-box models: they measure how much each training sample affects a model's prediction on a test point, and previous research indicates that the most influential training samples usually exhibit high semantic similarity to that test point. Building on this observation, we propose a novel approach that modifies the influence function to produce more precise influence estimates. Specifically, we add a weighting factor to the influence function based on the similarity between the test and training samples, computed using cosine similarity, Euclidean distance, or the structural similarity index (SSIM). The modified influence method is evaluated on a neonatal pain assessment model to explain its decisions, and it identifies influential training points more accurately than the baseline influence function. These results demonstrate the effectiveness of the proposed approach in elucidating the model's decision-making process.
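The abstract only sketches the idea, so the following is a minimal, hedged illustration of a similarity-weighted influence score on a toy logistic-regression model. The classic influence of a training point z on the test loss is -∇L(z_test)ᵀ H⁻¹ ∇L(z), and here that value is multiplied by the cosine similarity between the raw test and training inputs. The exact weighting scheme, model, and hyperparameters used by the authors are not given in the abstract; everything below (function names, the regularization strength `lam`, the choice of cosine similarity) is an assumption for illustration only.

```python
# Hedged sketch: similarity-weighted influence scores for a small
# regularized logistic-regression model. Not the authors' implementation;
# the weighting-by-similarity idea is taken from the abstract, the rest
# is a plausible stand-in.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss(theta, x, y, lam=0.01):
    """Gradient of the L2-regularized log loss for one sample (x, y)."""
    p = sigmoid(x @ theta)
    return (p - y) * x + lam * theta

def hessian(theta, X, lam=0.01):
    """Hessian of the mean L2-regularized log loss over training set X."""
    P = sigmoid(X @ theta)
    W = P * (1.0 - P)                      # per-sample curvature weights
    H = (X * W[:, None]).T @ X / len(X)
    return H + lam * np.eye(len(theta))

def weighted_influence(theta, X_train, y_train, x_test, y_test):
    """Classic influence of each training point on the test loss,
    scaled by cosine similarity between the raw inputs."""
    H_inv = np.linalg.inv(hessian(theta, X_train))
    g_test = grad_loss(theta, x_test, y_test)
    scores = []
    for x, y in zip(X_train, y_train):
        # Baseline influence: -grad(test)^T H^{-1} grad(train)
        infl = -g_test @ H_inv @ grad_loss(theta, x, y)
        # Assumed weighting factor: cosine similarity of the inputs
        cos = x_test @ x / (np.linalg.norm(x_test) * np.linalg.norm(x) + 1e-12)
        scores.append(cos * infl)
    return np.array(scores)

# Toy usage with a fixed (untrained) parameter vector.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(5, 3))
y_train = (X_train[:, 0] > 0).astype(float)
theta = np.array([1.0, 0.0, 0.0])
scores = weighted_influence(theta, X_train, y_train, X_train[0], y_train[0])
print(scores.shape)  # (5,)
```

Euclidean distance or SSIM (e.g., via `skimage.metrics.structural_similarity` for image inputs) could be substituted for the cosine term, as the abstract mentions all three; a distance would need to be converted into a similarity, for instance via 1/(1 + d).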