From local counterfactuals to global feature importance: efficient, robust, and model-agnostic explanations for brain connectivity networks.
Journal:
Computer methods and programs in biomedicine
Published Date:
Apr 16, 2023
Abstract
BACKGROUND: Explainable artificial intelligence (XAI) is a set of techniques that can enhance trust in mental state classifications by providing explanations for the reasoning behind the outputs of artificial intelligence (AI) models, which is especially valuable for high-dimensional and highly correlated brain signals. Feature importance and counterfactual explanations are two common approaches to generating such explanations, but both have drawbacks. Feature importance methods, such as Shapley additive explanations (SHAP), can be computationally expensive and sensitive to feature correlation, whereas counterfactual explanations only explain a single outcome rather than the entire model.
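To make the contrast concrete, the sketch below (not the paper's method, and using synthetic stand-ins for brain connectivity features) shows how a model-agnostic SHAP explainer yields a local explanation for one prediction, and how aggregating absolute SHAP values gives a global feature-importance summary; the many model evaluations per explanation illustrate the computational cost the abstract refers to.

```python
# Hypothetical sketch only: local SHAP explanation vs. global importance
# aggregated from |SHAP| values, on synthetic "connectivity" features.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))                        # stand-in connectivity features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)         # toy binary "mental state" label
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Model-agnostic explainer on the positive-class probability; each explanation
# requires many model evaluations over feature perturbations, which is the
# expensive route for high-dimensional brain data.
f = lambda data: model.predict_proba(data)[:, 1]
explainer = shap.Explainer(f, X[:100])                # background sample
shap_values = explainer(X[:50]).values                # shape (50, 30)

local_explanation = shap_values[0]                    # attributions for one subject
global_importance = np.abs(shap_values).mean(axis=0)  # mean |SHAP| per feature
print(global_importance.argsort()[::-1][:5])          # top-5 features overall
```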