Obtaining genetics insights from deep learning via explainable artificial intelligence.

Journal: Nature Reviews Genetics

Abstract

Artificial intelligence (AI) models based on deep learning now represent the state of the art for making functional predictions in genomics research. However, the basis on which such models make their predictions is often unknown. For genomics researchers, this missing explanatory information would frequently be of greater value than the predictions themselves, as it can enable new insights into genetic processes. We review progress in the emerging area of explainable AI (xAI), a field with the potential to empower life science researchers to gain mechanistic insights into complex deep learning models. We discuss and categorize approaches for model interpretation, including an intuitive understanding of how each approach works and their underlying assumptions and limitations in the context of typical high-throughput biological datasets.
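To give a concrete flavour of the attribution-style interpretation approaches the review categorizes, below is a minimal, hypothetical sketch of one widely used method (gradient x input saliency) applied to a toy convolutional model over one-hot-encoded DNA. The model architecture, the example sequence, and the use of PyTorch are illustrative assumptions for this sketch and are not taken from the article.

    import torch

    # Toy convolutional model over one-hot encoded DNA (channels A, C, G, T).
    # This architecture is purely illustrative, not a model from the review.
    model = torch.nn.Sequential(
        torch.nn.Conv1d(4, 8, kernel_size=5, padding=2),
        torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool1d(1),
        torch.nn.Flatten(),
        torch.nn.Linear(8, 1),
    )

    # One-hot encode a toy sequence; shape (batch=1, channels=4, length).
    seq = "ACGTACGTAC"
    onehot = torch.zeros(1, 4, len(seq))
    for i, base in enumerate(seq):
        onehot[0, "ACGT".index(base), i] = 1.0
    onehot.requires_grad_(True)  # track gradients w.r.t. the input

    # Backpropagate the scalar prediction to get per-input gradients.
    pred = model(onehot).sum()
    pred.backward()

    # Gradient x input: a per-position importance score along the sequence.
    saliency = (onehot.grad * onehot).sum(dim=1).squeeze(0)
    print(saliency)

In genomics applications, scores like these are typically inspected along a regulatory sequence to suggest which bases drive a model's prediction, which is the kind of explanatory output the review argues is often more valuable than the prediction itself.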

Authors

  • Gherman Novakovsky
    Centre for Molecular Medicine and Therapeutics, BC Children's Hospital Research Institute, Vancouver, BC, V5Z 4H4, Canada.
  • Nick Dexter
    Department of Mathematics, Simon Fraser University, Burnaby, British Columbia, Canada.
  • Maxwell W Libbrecht
    Department of Computer Science and Engineering, University of Washington, 185 Stevens Way, Seattle, Washington 98195-2350, USA.
  • Wyeth W Wasserman
Centre for Molecular Medicine and Therapeutics, Child and Family Research Institute, Department of Medical Genetics, University of British Columbia, Vancouver, British Columbia, V5Z 4H4, Canada. Electronic address: wyeth@cmmt.ubc.ca.
  • Sara Mostafavi
Department of Statistics, University of British Columbia, Vancouver, BC, V6T 1Z4, Canada. Electronic address: saram@stat.ubc.ca.