An Interpretable and Accurate Deep-Learning Diagnosis Framework Modeled With Fully and Semi-Supervised Reciprocal Learning.

Journal: IEEE Transactions on Medical Imaging
Published Date:

Abstract

The deployment of automated deep-learning classifiers in clinical practice has the potential to streamline the diagnosis process and improve diagnostic accuracy, but the acceptance of those classifiers depends on both their accuracy and their interpretability. In general, accurate deep-learning classifiers provide little model interpretability, while interpretable models do not have competitive classification accuracy. In this paper, we introduce a new deep-learning diagnosis framework, called InterNRL, that is designed to be both highly accurate and interpretable. InterNRL consists of a student-teacher framework, where the student model is an interpretable prototype-based classifier (ProtoPNet) and the teacher is an accurate global image classifier (GlobalNet). The two classifiers are mutually optimised with a novel reciprocal learning paradigm in which the student ProtoPNet learns from optimal pseudo labels produced by the teacher GlobalNet, while GlobalNet learns from ProtoPNet's classification performance and pseudo labels. This reciprocal learning paradigm enables InterNRL to be flexibly optimised under both fully- and semi-supervised learning scenarios, reaching state-of-the-art classification performance in both scenarios for the tasks of breast cancer and retinal disease diagnosis. Moreover, relying on weakly-labelled training images, InterNRL also achieves better breast cancer localisation and brain tumour segmentation results than other competing methods.

Authors

  • Chong Wang
    Shandong Xinhua Pharmaceutical Co., Ltd., No. 1, Lu Tai Road, High Tech Zone, Zibo 255199, China.
  • Yuanhong Chen
  • Fengbei Liu
  • Michael Elliott
  • Chun Fung Kwok
  • Carlos Pena-Solorzano
  • Helen Frazer
    Screening and Assessment Service, St Vincent's BreastScreen, 1st Floor Healy Wing, 41 Victoria Parade, Fitzroy, Victoria, 3065, Australia. Electronic address: Helen.Frazer@svha.org.au.
  • Davis James McCarthy
  • Gustavo Carneiro
    Australian Centre for Visual Technologies, The University of Adelaide, Australia. Electronic address: gustavo.carneiro@adelaide.edu.au.