A semi-supervised prototypical network for prostate lesion segmentation from multimodality MRI.

Journal: Physics in Medicine and Biology
PMID:

Abstract

Prostate lesion segmentation from multiparametric magnetic resonance images is particularly challenging due to the limited availability of labeled data. This scarcity of annotated images makes it difficult for supervised models to learn the complex features necessary for accurate lesion detection and segmentation.

We propose a novel semi-supervised algorithm that embeds prototype learning into mean-teacher (MT) training to improve the feature representation for unlabeled data. In this method, pseudo-labels generated by the teacher network simultaneously serve as supervision for prototype-based segmentation of unlabeled data. By enabling prototype segmentation to operate across both labeled and unlabeled data, the network enriches the pool of "lesion-representative prototypes" and allows prototype information to flow bidirectionally, along both support-to-query and query-to-support paths. This intersected, bidirectional information flow strengthens the model's generalization ability. The approach is distinct from the MT algorithm in that it involves few-shot training, and differs from prototypical learning in that it adopts unlabeled data for training.

This study evaluated multiple datasets with 767 patients from three different institutions, including the publicly available PROSTATEx/PROSTATEx2 dataset, which served as the holdout institution for reproducibility. The experimental results showed that the proposed algorithm outperformed state-of-the-art semi-supervised methods with limited labeled data, observing an improvement in Dice similarity coefficient with increasing labeled data, ranging from 0.04 to 0.09.

Our method shows promise in improving segmentation outcomes with limited labeled data, potentially aiding clinicians in making informed decisions on patient treatment and management. The algorithm implementation is available on GitHub via git@github.com:yanwenCi/semi-proto-seg.git.
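The core operations the abstract describes can be sketched in a few lines: a class prototype is computed by masked average pooling of a feature map over a label (or teacher pseudo-label), and query pixels are then assigned to their most similar prototype. The sketch below is a minimal, self-contained illustration of these two steps using NumPy; the function names, shapes, and use of cosine similarity are our assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def masked_average_prototype(features, mask):
    """Prototype via masked average pooling.

    features: (C, H, W) feature map; mask: (H, W) binary label or
    teacher-generated pseudo-label (the abstract's key idea: on
    unlabeled data, the pseudo-label plays the role of the mask).
    """
    w = mask.astype(features.dtype)
    return (features * w).sum(axis=(1, 2)) / (w.sum() + 1e-8)

def prototype_segment(features, prototypes):
    """Assign each query pixel to the most similar prototype
    (cosine similarity), returning a (H, W) label map."""
    c, h, w = features.shape
    feats = features.reshape(c, -1).T                         # (H*W, C)
    feats = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    protos = np.stack(prototypes)                             # (K, C)
    protos = protos / (np.linalg.norm(protos, axis=1, keepdims=True) + 1e-8)
    return (feats @ protos.T).argmax(axis=1).reshape(h, w)

# Toy example: 2-channel features that cleanly separate lesion/background.
feat = lambda m: np.stack([m.astype(float), 1.0 - m.astype(float)])
support_mask = np.array([[1, 0], [0, 1]])
bg = masked_average_prototype(feat(support_mask), 1 - support_mask)
fg = masked_average_prototype(feat(support_mask), support_mask)
query_mask = np.array([[1, 1], [0, 0]])
pred = prototype_segment(feat(query_mask), [bg, fg])          # label 1 = lesion
```

In the semi-supervised setting described above, the same two functions would run in both directions: labeled support images supply prototypes to segment unlabeled queries, and teacher pseudo-labels on unlabeled images supply prototypes to segment labeled ones.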

Authors

  • Wen Yan
    Department of Dermatology, Affiliated Hospital of Zunyi Medical University, Zunyi, China.
  • Yipeng Hu
    Centre for Medical Image Computing (CMIC), Departments of Medical Physics & Biomedical Engineering and Computer Science, University College London, UK.
  • Qianye Yang
    Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, London, UK.
  • Yunguan Fu
    Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, London, UK; InstaDeep, London, UK.
  • Tom Syer
    Centre for Medical Imaging, University College London, London, United Kingdom.
  • Zhe Min
UCL Hawkes Institute; Department of Medical Physics and Biomedical Engineering, University College London, Gower St., London WC1E 6BT, United Kingdom.
  • Shonit Punwani
    Centre for Medical Imaging, University College London, 2nd Floor Charles Bell House, 43-45 Foley Street, London, W1W 7TS, UK. shonit.punwani@gmail.com.
  • Mark Emberton
    Division of Surgery and Interventional Science, University College London, London, UK.
  • Dean C Barratt
    Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK; Centre for Medical Image Computing (CMIC), Departments of Medical Physics & Biomedical Engineering and Computer Science, University College London, UK.
  • Carmen C M Cho
    Department of Imaging and Interventional Radiology, Prince of Wales Hospital, 30-32 Ngan Shing Street, Shatin, New Territories, Hong Kong Special Administrative Region of China, People's Republic of China.
  • Bernard Chiu
    Department of Electronic Engineering, City University of Hong Kong, Kowloon, Hong Kong, China. Electronic address: bcychiu@cityu.edu.hk.