Contrastive rendering with semi-supervised learning for ovary and follicle segmentation from 3D ultrasound.

Journal: Medical Image Analysis
Published Date:

Abstract

Segmentation of the ovary and follicles from 3D ultrasound (US) is a crucial step in measurement tools for female infertility diagnosis. Since manual segmentation is time-consuming and operator-dependent, an accurate and fast automatic segmentation method is highly desirable. However, current deep-learning-based methods struggle to segment the ovary and follicles precisely due to ambiguous boundaries and insufficient annotations. In this paper, we propose a contrastive rendering (C-Rend) framework to segment the ovary and follicles with detail-refined boundaries. Furthermore, we incorporate the proposed C-Rend into a semi-supervised learning (SSL) framework, leveraging unlabeled data for better performance. Highlights of this paper include: (1) A rendering task is performed to estimate boundaries accurately via enriched feature representation learning. (2) Point-wise contrastive learning is proposed to increase the similarity of intra-class points and, contrastively, decrease the similarity of inter-class points. (3) C-Rend plays a complementary role in the SSL framework's uncertainty-aware learning, providing reliable supervision and achieving superior segmentation performance. Through extensive validation on large in-house datasets with partial annotations, our method outperforms state-of-the-art methods on various evaluation metrics for both the ovary and follicles.
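The point-wise contrastive learning in highlight (2) can be sketched as an InfoNCE-style loss over sampled point features, pulling same-class points together and pushing different-class points apart. This is an illustrative reconstruction, not the authors' exact formulation; the temperature `tau` and feature shapes are assumptions:

```python
import numpy as np

def pointwise_contrastive_loss(features, labels, tau=0.1):
    """InfoNCE-style loss over sampled point features.

    features: (N, D) array of point embeddings (e.g. points sampled near
              ambiguous boundaries during rendering).
    labels:   (N,) class ids; same-class pairs act as positives,
              different-class pairs as negatives.
    tau:      temperature controlling the sharpness of the softmax (assumed).
    """
    # L2-normalize so dot products become cosine similarities.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / tau                       # (N, N) similarity logits
    np.fill_diagonal(sim, -np.inf)            # exclude self-pairs

    pos = labels[:, None] == labels[None, :]  # positive-pair mask
    np.fill_diagonal(pos, False)

    # Row-wise log-softmax; average the negative log-likelihood
    # over all positive (intra-class) pairs.
    logp = sim - np.log(np.sum(np.exp(sim), axis=1, keepdims=True))
    return -logp[pos].mean()
```

When intra-class points are already clustered, the loss is near zero; when positives are far apart and negatives are close, it grows, which is the behavior the abstract describes.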

Authors

  • Xin Yang
    Department of Oral Maxillofacial-Head Neck Oncology, Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai, China.
  • Haoming Li
  • Yi Wang
    Department of Neurology, Children's Hospital of Fudan University, National Children's Medical Center, Shanghai, China.
  • Xiaowen Liang
  • Chaoyu Chen
    National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China.
  • Xu Zhou
School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China.
  • Fengyi Zeng
    Department of Ultrasound Medicine, Third Affiliated Hospital of Guangzhou Medical University, Guangzhou, China.
  • Jinghui Fang
  • Alejandro Frangi
School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), School of Computing, University of Leeds, Leeds, UK; Medical Imaging Research Center (MIRC), University Hospital Gasthuisberg, Electrical Engineering Department, KU Leuven, Leuven, Belgium.
  • Zhiyi Chen
    School of Pharmaceutical Science and Technology, Hangzhou Institute for Advanced Study, UCAS, Hangzhou, Zhejiang Province, China.
  • Dong Ni