From local counterfactuals to global feature importance: efficient, robust, and model-agnostic explanations for brain connectivity networks.

Journal: Computer Methods and Programs in Biomedicine
Published Date:

Abstract

BACKGROUND: Explainable artificial intelligence (XAI) can enhance trust in mental state classifications by explaining the reasoning behind the outputs of artificial intelligence (AI) models, which is especially valuable for high-dimensional and highly correlated brain signals. Feature importance and counterfactual explanations are two common approaches to generating such explanations, but both have drawbacks: feature importance methods such as SHapley Additive exPlanations (SHAP) can be computationally expensive and sensitive to feature correlation, while counterfactual explanations account for only a single model outcome rather than the model as a whole.
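
To make the contrast in the background concrete, the sketch below (not the authors' method; a minimal illustration using the open-source `shap` package, scikit-learn, and a synthetic stand-in for brain-connectivity features) shows the two explanation styles the abstract compares: a global feature ranking from SHAP attributions, whose cost grows with the number of samples and features, and a hand-crafted counterfactual perturbation that explains only one prediction.

```python
# Minimal, illustrative sketch: global SHAP importance vs. a local
# counterfactual. All data, model choices, and perturbations here are
# assumptions for demonstration, not the paper's pipeline.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                 # stand-in connectivity features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic binary "mental state"

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Feature-importance route: per-sample SHAP attributions, aggregated into
# a global ranking via the mean absolute attribution per feature. This
# per-sample computation is the expense the abstract refers to.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)         # shape: (n_samples, n_features)
global_importance = np.abs(shap_values).mean(axis=0)
print("top features:", global_importance.argsort()[::-1][:3])

# Counterfactual route (schematic): perturb one positive sample until the
# predicted class flips. Note this explains that single outcome only,
# not the model's behavior globally.
x_cf = X[y == 1][0].copy()
x_cf[0] -= 3.0                                 # hand-picked perturbation; with
                                               # this synthetic rule it usually
                                               # flips the predicted class
print("original:", model.predict([X[y == 1][0]]), "counterfactual:", model.predict([x_cf]))
```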

Authors

  • Antonio Luca Alfeo
    Department of Information Engineering, University of Pisa, Largo Lucio Lazzarino, 1, Pisa, 56126, Italy; Bioengineering & Robotics Research Center E. Piaggio, University of Pisa, Largo Lucio Lazzarino, 1, Pisa, 56126, Italy. Electronic address: luca.alfeo@unipi.it.
  • Antonio G Zippo
    Institute of Neuroscience, Consiglio Nazionale delle Ricerche, Milan, Italy.
  • Vincenzo Catrambone
  • Mario G C A Cimino
    Department of Information Engineering, University of Pisa, Largo Lucio Lazzarino, 1, Pisa, 56126, Italy; Bioengineering & Robotics Research Center E. Piaggio, University of Pisa, Largo Lucio Lazzarino, 1, Pisa, 56126, Italy.
  • Nicola Toschi
    Department of Biomedicine and Prevention, University of Rome "Tor Vergata", Via Cracovia, 00133, Roma, RM, Italy; A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA. Electronic address: toschi@med.uniroma2.it.
  • Gaetano Valenza