Explainability of Protein Deep Learning Models.

Journal: International Journal of Molecular Sciences
Published Date:

Abstract

Protein embeddings have become the main source of information about proteins, producing state-of-the-art solutions to many problems, including protein interaction prediction, a fundamental problem in proteomics. Understanding the embeddings and what causes the interactions is essential, since these models are black boxes that offer little transparency. In the first study of its kind, we investigate the inner workings of these models using explainable AI (XAI) approaches. We perform extensive testing (3.3 TB of data in total) involving nine of the best-known XAI methods on two problems: (i) the prediction of protein interaction sites using the current top method, Seq-InSite, and (ii) the production of protein embedding vectors using three methods: ProtBERT, ProtT5, and Ankh. The results are evaluated in terms of their ability to correlate with six basic amino acid properties (aromaticity, acidity/basicity, hydrophobicity, molecular mass, van der Waals volume, and dipole moment), as well as the propensity for interaction with other proteins, the impact of distant residues, and the infidelity scores of the XAI methods. The results are unexpected. Some XAI methods are much better than others at discovering essential information; simple methods can be as good as advanced ones; and different protein embedding vectors can capture distinct properties, indicating significant room for improvement in embedding quality.
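To make the evaluation idea concrete, below is a minimal, self-contained Python sketch (not from the paper): it computes occlusion-style per-residue attributions for a toy interaction scorer and correlates them with the Kyte-Doolittle hydrophobicity scale, one of the six properties listed above. The scorer, the example sequence, and the baseline residue are illustrative assumptions; they stand in for, and are not, Seq-InSite, ProtBERT, ProtT5, or Ankh.

```python
"""Illustrative sketch only: an occlusion-style XAI attribution
correlated with an amino acid property, loosely mirroring the paper's
evaluation setup. The model below is a hypothetical toy scorer."""

import numpy as np
from scipy.stats import pearsonr

# Kyte-Doolittle hydrophobicity scale (hydrophobicity is one of the
# six properties the study correlates attributions against).
KD = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def toy_interaction_score(seq: str) -> float:
    """Hypothetical stand-in for a predictor such as Seq-InSite:
    returns one scalar 'interaction propensity' for the sequence.
    Toy rule: hydrophobic residues raise the score."""
    return float(np.mean([KD.get(aa, 0.0) for aa in seq]))

def occlusion_attributions(seq: str, baseline: str = "G") -> np.ndarray:
    """One of the simplest XAI methods (occlusion): the attribution of
    residue i is the score drop when residue i is replaced by a
    baseline residue."""
    full = toy_interaction_score(seq)
    attrs = np.empty(len(seq))
    for i in range(len(seq)):
        occluded = seq[:i] + baseline + seq[i + 1:]
        attrs[i] = full - toy_interaction_score(occluded)
    return attrs

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # arbitrary example sequence
attrs = occlusion_attributions(seq)
hydro = np.array([KD[aa] for aa in seq])

# Evaluation step analogous to the paper's: how well do the
# per-residue attributions track a basic amino acid property?
r, p = pearsonr(attrs, hydro)
print(f"Pearson r between attributions and hydrophobicity: {r:.3f} (p={p:.2g})")
```

Because the toy scorer is itself a linear function of hydrophobicity, the correlation here comes out at 1 by construction; with a real predictor and a real XAI method, the strength of this correlation is precisely what the study measures.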

Authors

  • Zahra Fazel
    Department of Computer Science, University of Western Ontario, London, ON N6A 5B7, Canada.
  • Camila P E de Souza
Department of Statistical and Actuarial Sciences, University of Western Ontario, London, ON, Canada.
  • G Brian Golding
Department of Biology, McMaster University, 1280 Main Street West, Hamilton, ON, Canada. golding@mcmaster.ca.
  • Lucian Ilie
    Department of Computer Science, University of Western Ontario, London, ON N6A 5B7, Canada.