Semi-supervised learning towards automated segmentation of PET images with limited annotations: application to lymphoma patients.

Journal: Physical and Engineering Sciences in Medicine

Abstract

Manual segmentation poses a time-consuming challenge for disease quantification, therapy evaluation, treatment planning, and outcome prediction. Convolutional neural networks (CNNs) hold promise for accurately identifying tumor locations and boundaries in PET scans. However, a major hurdle is the extensive amount of annotated data required for supervised training. To overcome this limitation, this study explores semi-supervised approaches that utilize unlabeled data, focusing on PET images of diffuse large B-cell lymphoma (DLBCL) and primary mediastinal large B-cell lymphoma (PMBCL) obtained from two centers. We considered 2-[18F]FDG PET images of 292 patients with PMBCL (n = 104) and DLBCL (n = 188) (n = 232 for training and validation, and n = 60 for external testing). We harnessed classical wisdom embedded in traditional segmentation methods, such as the fuzzy C-means (FCM) clustering loss function, to tailor the training strategy for a 3D U-Net model, incorporating both supervised and unsupervised learning. Various supervision levels were explored: fully supervised methods with the labeled FCM loss and the unified focal/Dice loss; unsupervised methods with the robust FCM (RFCM) loss and the Mumford-Shah (MS) loss; and semi-supervised methods combining the MS loss with the supervised Dice loss (MS + Dice) or the RFCM loss with the labeled FCM loss (RFCM + FCM). The unified loss function yielded higher Dice scores (0.73 ± 0.11; 95% CI 0.67-0.80) than the Dice loss (p < 0.01). Among the semi-supervised approaches, RFCM + αFCM (α = 0.3) performed best, with a Dice score of 0.68 ± 0.10 (95% CI 0.45-0.77), outperforming MS + αDice at any supervision level (any α) (p < 0.01). The other semi-supervised approach, MS + αDice (α = 0.2), achieved a Dice score of 0.59 ± 0.09 (95% CI 0.44-0.76), surpassing the other supervision levels (p < 0.01). Given the time-consuming nature of manual delineation and the inconsistencies it may introduce, semi-supervised approaches hold promise for automating medical image segmentation workflows.
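
For illustration only (this is not the authors' released code): a minimal PyTorch-style sketch of how a semi-supervised composite loss of the kind described above could be wired up, assuming a 3D U-Net that outputs per-voxel softmax memberships over background/foreground, an unsupervised fuzzy-C-means-style clustering term computed from PET intensities, and a supervised Dice term weighted by α so that α sets the supervision level (α = 0 reduces to purely unsupervised training). All function names, tensor shapes, and the exact weighting scheme are assumptions for the sketch.

    # Hypothetical sketch of a semi-supervised loss: unsupervised fuzzy-clustering
    # term + alpha * supervised Dice term (not the paper's exact implementation).
    import torch

    def fcm_clustering_loss(probs: torch.Tensor, image: torch.Tensor, m: float = 2.0) -> torch.Tensor:
        """Unsupervised FCM-style loss.
        probs: (B, K, D, H, W) softmax memberships from the network.
        image: (B, 1, D, H, W) PET intensity volume.
        """
        u = probs.clamp_min(1e-8) ** m                  # fuzzified memberships u_k^m
        # Cluster centroids v_k = sum_x u_k^m(x) I(x) / sum_x u_k^m(x), per batch item.
        num = (u * image).sum(dim=(2, 3, 4))            # (B, K)
        den = u.sum(dim=(2, 3, 4)) + 1e-8               # (B, K)
        v = (num / den).view(*num.shape, 1, 1, 1)       # (B, K, 1, 1, 1)
        # Membership-weighted squared distance of every voxel to its centroid.
        return (u * (image - v) ** 2).mean()

    def dice_loss(probs: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
        """Supervised soft Dice loss on the foreground channel.
        probs:  (B, K, D, H, W) softmax memberships; channel 1 is foreground.
        target: (B, D, H, W) binary ground-truth mask.
        """
        fg = probs[:, 1]
        inter = (fg * target).sum(dim=(1, 2, 3))
        union = fg.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
        return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()

    def semi_supervised_loss(probs, image, target=None, alpha: float = 0.3) -> torch.Tensor:
        """Total loss = unsupervised clustering term + alpha * supervised Dice term.
        When a batch has no annotation (target is None), only the unsupervised
        term is used, so unlabeled scans still contribute gradients.
        """
        loss = fcm_clustering_loss(probs, image)
        if target is not None:
            loss = loss + alpha * dice_loss(probs, target)
        return loss

    if __name__ == "__main__":
        # Toy shapes only; a real pipeline would feed 3D U-Net outputs on PET patches.
        probs = torch.softmax(torch.randn(2, 2, 8, 32, 32), dim=1)
        image = torch.rand(2, 1, 8, 32, 32)
        mask = (torch.rand(2, 8, 32, 32) > 0.5).float()
        print(semi_supervised_loss(probs, image, mask, alpha=0.3).item())

In this reading, unlabeled scans contribute only the clustering term, while labeled scans additionally receive the α-weighted supervised term, which is one plausible interpretation of the RFCM + αFCM and MS + αDice notation in the abstract.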

Authors

  • Fereshteh Yousefirizi
    Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada. Electronic address: frizi@bccrc.ca.
  • Isaac Shiri
    Biomedical and Health Informatics, Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran.
  • Joo Hyun O
    College of Medicine, Seoul St. Mary's Hospital, The Catholic University of Korea, Seoul, Republic of Korea.
  • Ingrid Bloise
    BC Cancer, Vancouver, BC, Canada.
  • Patrick Martineau
    BC Cancer, Vancouver, BC, Canada.
  • Don Wilson
    BC Cancer, Vancouver, BC, Canada.
  • François Bénard
    Department of Molecular Oncology, BC Cancer, Vancouver, British Columbia, Canada.
  • Laurie H Sehn
    BC Cancer, Vancouver, BC, Canada.
  • Kerry J Savage
    BC Cancer, Vancouver, BC, Canada.
  • Habib Zaidi
    Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland. habib.zaidi@hcuge.ch.
  • Carlos F Uribe
    Department of Molecular Oncology, BC Cancer, Vancouver, British Columbia, Canada.
  • Arman Rahmim