UniSAL: Unified Semi-supervised Active Learning for histopathological image classification.

Journal: Medical Image Analysis
PMID:

Abstract

Histopathological image classification using deep learning is crucial for accurate and efficient cancer diagnosis. However, annotating a large number of histopathological images for training is costly and time-consuming, leading to a scarcity of labeled data for training deep neural networks. To reduce human effort and improve annotation efficiency, we propose a Unified Semi-supervised Active Learning framework (UniSAL) that effectively selects informative and representative samples for annotation. First, unlike most existing active learning methods that train only on labeled samples in each round, we propose dual-view high-confidence pseudo training, which uses both labeled and unlabeled images to train the model employed for query sample selection: two networks operating on differently augmented versions of an input image provide diverse pseudo labels for each other, and pseudo label-guided class-wise contrastive learning is introduced to obtain better feature representations for effective sample selection. Second, based on the model trained at each round, we design a novel uncertain and representative sample selection strategy. It consists of a Disagreement-aware Uncertainty Selector (DUS), which selects informative uncertain samples with inconsistent predictions between the two networks, and a Compact Selector (CS), which removes redundancy among the selected samples. We extensively evaluate our method on three public pathological image classification datasets, i.e., the CRC5000, Chaoyang and CRC100K datasets, and the results demonstrate that UniSAL significantly surpasses several state-of-the-art active learning methods and reduces the annotation cost to around 10% while achieving performance comparable to full annotation. Code is available at https://github.com/HiLab-git/UniSAL.
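
To make the selection step in the abstract more concrete, below is a minimal Python sketch of one query round. It only illustrates the general flow described above: a disagreement-aware uncertainty step followed by a redundancy-removal step. The function name select_queries, the entropy-based uncertainty score, and the greedy farthest-point sampling used for redundancy removal are illustrative assumptions, not the exact criteria used in UniSAL; the paper and the repository linked above define the actual method.

    import numpy as np

    def entropy(p, eps=1e-12):
        """Predictive entropy of an (N, C) probability matrix."""
        return -np.sum(p * np.log(p + eps), axis=1)

    def select_queries(p1, p2, feats, budget, candidate_factor=3):
        """Illustrative query selection for one active learning round.

        p1, p2 : (N, C) softmax outputs of the two networks on the unlabeled pool.
        feats  : (N, D) feature embeddings of the same samples.
        budget : number of samples to query in this round.
        """
        # Disagreement-aware uncertainty step (DUS-like, assumed scoring):
        # prefer samples whose two predictions disagree, ranked by mean entropy.
        disagree = np.argmax(p1, axis=1) != np.argmax(p2, axis=1)
        scores = 0.5 * (entropy(p1) + entropy(p2))
        pool = np.where(disagree)[0]
        if len(pool) < budget:  # fall back to the full pool if disagreement is rare
            pool = np.arange(len(p1))
        n_cand = min(len(pool), candidate_factor * budget)
        candidates = pool[np.argsort(-scores[pool])[:n_cand]]

        # Compact selection step (CS-like, assumed heuristic): greedy
        # farthest-point sampling in feature space so that the final
        # queries are mutually dissimilar.
        selected = [candidates[0]]
        cand_feats = feats[candidates]
        dist = np.linalg.norm(cand_feats - feats[selected[0]], axis=1)
        while len(selected) < min(budget, len(candidates)):
            idx = int(np.argmax(dist))
            selected.append(candidates[idx])
            dist = np.minimum(dist, np.linalg.norm(cand_feats - feats[candidates[idx]], axis=1))
        return np.array(selected)

The returned indices would be sent to annotators, and the newly labeled samples added to the labeled set before the next round of dual-view pseudo training.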

Authors

  • Lanfeng Zhong
    School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China.
  • Kun Qian
    Key Laboratory of Brain Health Intelligent Evaluation and Intervention (Beijing Institute of Technology), Ministry of Education, Beijing, China.
  • Xin Liao
    Department of Medical Imaging, The Affiliated Hospital of Guizhou Medical University, Guiyang, Guizhou, China.
  • Zongyao Huang
    Department of Pathology, Sichuan Clinical Research Center for Cancer, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, University of Electronic Science and Technology of China, Chengdu, China.
  • Yang Liu
    Department of Computer Science, Hong Kong Baptist University, Hong Kong, China.
  • Shaoting Zhang
  • Guotai Wang
    Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK.