MTL-ABSNet: Atlas-Based Semi-Supervised Organ Segmentation Network With Multi-Task Learning for Medical Images.

Journal: IEEE Journal of Biomedical and Health Informatics
Published Date:

Abstract

Organ segmentation is one of the most important steps in various medical image analysis tasks. Recently, semi-supervised learning (SSL) has attracted much attention because it reduces labeling cost. However, most existing SSL methods neglect the prior shape and position information specific to medical images, leading to unsatisfactory localization and non-smooth object boundaries. In this paper, we propose a novel atlas-based semi-supervised segmentation network with multi-task learning for medical organs, named MTL-ABSNet, which incorporates anatomical priors and makes full use of unlabeled data in a self-training and multi-task learning manner. MTL-ABSNet consists of two components: an Atlas-Based Semi-Supervised Segmentation Network (ABSNet) and a Reconstruction-Assisted Module (RAM). Specifically, the ABSNet improves existing SSL methods by utilizing an atlas prior, which generates credible pseudo labels in a self-training manner, while the RAM further assists the segmentation network by capturing anatomical structures from the original images in a multi-task learning manner. Better reconstruction quality is achieved by using the MS-SSIM loss function, which further improves segmentation accuracy. Experimental results on liver and spleen datasets demonstrate that our method significantly outperforms existing state-of-the-art methods.
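
The abstract describes the multi-task objective only at a high level. Below is a minimal PyTorch sketch of how a supervised segmentation term and an MS-SSIM reconstruction term (for the RAM branch) might be combined into one loss; it is not the authors' code. The `pytorch_msssim` dependency, the weighting factor `lambda_rec`, and the single-channel, intensity-normalized inputs are all assumptions for illustration.

```python
# Illustrative sketch (assumptions: pytorch_msssim package, single-channel
# images normalized to [0, 1], hypothetical weighting lambda_rec).
import torch.nn as nn
from pytorch_msssim import MS_SSIM


class MultiTaskLoss(nn.Module):
    def __init__(self, lambda_rec: float = 0.5):
        super().__init__()
        self.seg_loss = nn.CrossEntropyLoss()               # segmentation head
        self.ms_ssim = MS_SSIM(data_range=1.0, channel=1)    # reconstruction head
        self.lambda_rec = lambda_rec

    def forward(self, seg_logits, seg_target, recon, image):
        # Segmentation term: cross-entropy against ground-truth or pseudo labels.
        l_seg = self.seg_loss(seg_logits, seg_target)
        # Reconstruction term: 1 - MS-SSIM between the reconstructed and the
        # original image, pushing the shared encoder to capture anatomy.
        l_rec = 1.0 - self.ms_ssim(recon, image)
        return l_seg + self.lambda_rec * l_rec
```

In this sketch the reconstruction term acts as an auxiliary task on a shared encoder, so unlabeled images can still contribute gradients through the MS-SSIM loss even when no segmentation label is available.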

Authors

  • Huimin Huang
  • Qingqing Chen
  • Lanfen Lin
    State Key Lab of CAD & CG, Zhejiang University, Hangzhou, 310027, China.
  • Ming Cai
    Department of Orthopedics, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, 200072, P.R. China. cmdoctor@tongji.edu.cn
  • Qiaowei Zhang
  • Yutaro Iwamoto
  • Xianhua Han
  • Akira Furukawa
    National Institutes for Quantum and Radiological Science and Technology, 4-9-1 Anagawa, Inage-ku, Chiba, Japan.
  • Shuzo Kanasaki
  • Yen-Wei Chen
  • Ruofeng Tong
    State Key Lab of CAD & CG, Zhejiang University, Hangzhou, 310027, China.
  • Hongjie Hu