Coloring Molecules with Explainable Artificial Intelligence for Preclinical Relevance Assessment.

Journal: Journal of Chemical Information and Modeling
Published Date:

Abstract

Graph neural networks are able to solve certain drug discovery tasks such as molecular property prediction and molecule generation. However, these models are often considered "black boxes" that are hard to debug. This study aimed to improve modeling transparency for rational molecular design by applying the integrated gradients explainable artificial intelligence (XAI) approach to graph neural network models. Models were trained to predict plasma protein binding, hERG channel inhibition, passive permeability, and cytochrome P450 inhibition. The proposed methodology highlighted molecular features and structural elements that are in agreement with known pharmacophore motifs, correctly identified property cliffs, and provided insights into unspecific ligand-target interactions. The developed XAI approach is fully open-sourced and can be used by practitioners to train new models on other clinically relevant endpoints.
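The integrated gradients method the abstract refers to attributes a model's prediction to its input features by averaging gradients along a straight-line path from a baseline to the input. The following is a minimal, framework-free sketch of that idea; the toy quadratic model, function names, and step count are illustrative assumptions, not the authors' actual GNN code:

```python
def integrated_gradients(grad_f, x, baseline=None, steps=50):
    """Approximate integrated gradients with a midpoint Riemann sum.

    grad_f:   callable returning the model's gradient at a point
    x:        input features (list of floats)
    baseline: reference input (defaults to all zeros)
    """
    if baseline is None:
        baseline = [0.0] * len(x)
    accum = [0.0] * len(x)
    for k in range(steps):
        alpha = (k + 0.5) / steps  # midpoint of the k-th subinterval
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_f(point)
        for i in range(len(x)):
            accum[i] += g[i]
    # attribution_i = (x_i - baseline_i) * average gradient along the path
    return [(xi - b) * a / steps for xi, b, a in zip(x, baseline, accum)]


# Toy differentiable "model": f(x) = sum(x_i^2), with gradient 2x.
grad_f = lambda p: [2.0 * v for v in p]

x = [1.0, -2.0, 3.0]
attr = integrated_gradients(grad_f, x)
print(attr)                # per-feature attributions
print(sum(attr))           # completeness: should equal f(x) - f(baseline) = 14
```

The "completeness" check at the end (attributions summing to the difference in model output between input and baseline) is the axiomatic property that makes integrated gradients attractive for coloring atoms and bonds by their contribution to a predicted property.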

Authors

  • José Jiménez-Luna
    Computational Science Laboratory, Parc de Recerca Biomèdica de Barcelona, Universitat Pompeu Fabra, C Dr Aiguader 88, Barcelona 08003, Spain. Email: gianni.defabritiis@upf.edu.
  • Miha Škalič
    Computational Biophysics Laboratory, Universitat Pompeu Fabra, Parc de Recerca Biomèdica de Barcelona, Carrer del Dr. Aiguader 88, Barcelona 08003, Spain.
  • Nils Weskamp
    Department of Medicinal Chemistry, Boehringer Ingelheim Pharma GmbH & Co. KG, Birkendorfer Straße 65, 88397 Biberach an der Riss, Germany.
  • Gisbert Schneider
    Swiss Federal Institute of Technology (ETH), Department of Chemistry and Applied Biosciences, Vladimir-Prelog-Weg 4, CH-8093, Zurich, Switzerland.