Explainable Graph Neural Networks in Chemistry: Combining Attribution and Uncertainty Quantification.

Journal: Journal of Chemical Information and Modeling

Abstract

Graph Neural Networks (GNNs) are powerful tools for predicting chemical properties, but their black-box nature can limit trust and utility. Explainability through feature attribution and awareness of prediction uncertainty are critical for practical applications, for example, in iterative lab-in-the-loop scenarios. We systematically evaluate post hoc feature attribution methods and study their integration with uncertainty quantification in GNNs for chemistry. Our findings reveal a strong synergy: attributing uncertainty to specific input features (atoms or substructures) provides a granular understanding of model confidence and highlights potential data gaps or model limitations. We evaluated several attribution approaches on aqueous solubility and molecular weight prediction tasks, demonstrating that methods such as Feature Ablation and Shapley Value Sampling can effectively identify the molecular substructures driving a prediction and its uncertainty. This combined approach significantly enhances the interpretability of chemical GNNs and the actionable insights derived from them, facilitating the design of more useful models in research and development.
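
As a concrete illustration of the combination the abstract describes, the minimal Python sketch below shows how per-atom Feature Ablation can attribute both a model's prediction and its uncertainty to individual atoms. This is not the authors' implementation: the toy surrogate model, the deep-ensemble uncertainty estimate, the feature dimensions, and the random "molecule" are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout, not the paper's code):
# attribute a prediction AND its ensemble-based uncertainty to individual
# atoms via Feature Ablation.
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyGNN(nn.Module):
    """Toy stand-in for a GNN: encodes per-atom features, sum-pools to a scalar.
    (No message passing; kept deliberately simple so the sketch is self-contained.)"""
    def __init__(self, n_feats=8, hidden=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_feats, hidden), nn.ReLU())
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (n_atoms, n_feats)
        return self.out(self.enc(x).sum(dim=0))  # sum-pool readout -> scalar

# Deep ensemble as a simple uncertainty quantifier: the mean over members is
# the prediction, the standard deviation is an (epistemic) uncertainty proxy.
ensemble = [TinyGNN() for _ in range(5)]

def predict_with_uncertainty(x):
    preds = torch.stack([m(x) for m in ensemble])
    return preds.mean().item(), preds.std().item()

x = torch.randn(6, 8)                          # hypothetical molecule, 6 atoms
base_pred, base_unc = predict_with_uncertainty(x)

# Feature Ablation: zero out each atom's features in turn and record how the
# prediction and the uncertainty respond; large deltas flag the atoms (and,
# by grouping, substructures) that drive the output and the model's confidence.
for atom in range(x.shape[0]):
    x_abl = x.clone()
    x_abl[atom] = 0.0
    pred, unc = predict_with_uncertainty(x_abl)
    print(f"atom {atom}: d_pred={base_pred - pred:+.3f}, "
          f"d_unc={base_unc - unc:+.3f}")
```

In this sketch, atoms whose ablation strongly shifts the ensemble standard deviation are the kind of uncertainty-driving substructures the abstract refers to; Shapley Value Sampling would follow the same pattern but average ablation effects over sampled feature orderings.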

Authors

  • Leonid Komissarov
    Roche Pharmaceutical Research and Early Development, Roche Innovation Center Basel, 4070 Basel, Switzerland.
  • Nenad Manevski
    Pharmaceutical Sciences, Pharma Research and Early Development, Roche Innovation Center Basel, F. Hoffmann-La Roche Ltd., Grenzacherstrasse 124, CH-4070 Basel, Switzerland.
  • Katrin Groebke Zbinden
    Roche Pharmaceutical Research and Early Development, Roche Innovation Center Basel, 4070 Basel, Switzerland.
  • Lisa Sach-Peltason
    Roche Pharmaceutical Research and Early Development, Roche Innovation Center Basel, 4070 Basel, Switzerland.