Deciphering RNA splicing logic with interpretable machine learning.

Journal: Proceedings of the National Academy of Sciences of the United States of America
PMID:

Abstract

Machine learning methods, particularly neural networks trained on large datasets, are transforming how scientists approach scientific discovery and experimental design. However, current state-of-the-art neural networks are limited by their uninterpretability: Despite their excellent accuracy, they cannot describe how they arrived at their predictions. Here, using an "interpretable-by-design" approach, we present a neural network model that provides insights into RNA splicing, a fundamental process in the transfer of genomic information into functional biochemical products. Although we designed our model to emphasize interpretability, its predictive accuracy is on par with state-of-the-art models. To demonstrate the model's interpretability, we introduce a visualization that, for any given exon, allows us to trace and quantify the entire decision process from input sequence to output splicing prediction. Importantly, the model revealed uncharacterized components of the splicing logic, which we experimentally validated. This study highlights how interpretable machine learning can advance scientific discovery.
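To make the "interpretable-by-design" idea concrete, the following is a minimal, hypothetical sketch in the spirit the abstract describes: a predictor whose output is a monotone function of a sum of per-motif contributions, so every prediction can be traced back, motif by motif, to the input sequence. The motifs, weights, and the names MOTIFS, motif_contributions, and predict_psi are illustrative assumptions for this sketch, not the paper's actual model or code.

    # Hypothetical sketch of an interpretable-by-design splicing predictor.
    # The prediction is a sigmoid of a SUM of per-motif contributions, so the
    # full decision can be decomposed motif by motif. All motifs and weights
    # below are invented for illustration; they are NOT from the paper.
    import math

    MOTIFS = {
        "GGG": +0.8,     # assumed enhancer-like motif and weight
        "TTT": -0.6,     # assumed silencer-like motif and weight
        "GTAAG": +1.2,   # assumed splice-site-like motif and weight
    }

    def motif_contributions(seq):
        """Additive contribution of each motif: (count in seq) x (weight)."""
        return {m: seq.count(m) * w for m, w in MOTIFS.items()}

    def predict_psi(seq):
        """Squash the summed contributions into an inclusion level in [0, 1]."""
        total = sum(motif_contributions(seq).values())
        return 1.0 / (1.0 + math.exp(-total))

    exon = "CAGGGGTTTGTAAGGG"
    for motif, c in motif_contributions(exon).items():
        print(f"{motif}: {c:+.2f}")   # the per-motif 'decision trace'
    print(f"predicted inclusion: {predict_psi(exon):.3f}")

Because the contributions combine additively before a single monotone squashing function, each motif's share of the final prediction is directly quantifiable, which is the property that makes a decision-trace visualization of the kind the abstract mentions possible.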

Authors

  • Susan E Liao
    Department of Computer Science, Courant Institute of Mathematical Sciences, New York University, New York, NY 10012.
  • Mukund Sudarshan
    Courant Institute of Mathematical Sciences, New York University, New York, NY 10012.
  • Oded Regev
    Department of Computer Science, Courant Institute of Mathematical Sciences, New York University, New York, NY 10012.