MASS: Modality-collaborative semi-supervised segmentation by exploiting cross-modal consistency from unpaired CT and MRI images.

Journal: Medical Image Analysis
Published Date:

Abstract

Training deep segmentation models for medical images often requires a large amount of labeled data. To reduce this burden, semi-supervised segmentation has been employed to produce satisfactory delineations at an affordable labeling cost. However, traditional semi-supervised segmentation methods fail to exploit unpaired multi-modal data, which are widely available in today's clinical routine. In this paper, we address this gap by proposing Modality-collAborative Semi-Supervised segmentation (MASS), which utilizes modality-independent knowledge learned from unpaired CT and MRI scans. To exploit such knowledge, MASS uses cross-modal consistency to regularize deep segmentation models in both the semantic and anatomical spaces, learning intra- and inter-modal correspondences that warp atlas labels to produce predictions. To better capture inter-modal correspondence from a feature-alignment perspective, we propose a contrastive similarity loss that regularizes the latent spaces of both modalities, yielding generalized and robust modality-independent representations. Compared with semi-supervised and multi-modal segmentation counterparts, MASS achieves nearly 6% improvement under extremely limited supervision.
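A minimal sketch of how a contrastive similarity loss for cross-modal feature alignment could look, assuming an InfoNCE-style formulation with row-wise paired CT/MRI embeddings and a temperature hyperparameter; the abstract does not give the exact loss used by MASS, so all names and choices here are illustrative assumptions, not the authors' released code.

import torch
import torch.nn.functional as F

def contrastive_similarity_loss(ct_feats, mri_feats, temperature=0.1):
    # ct_feats, mri_feats: (N, D) embeddings, assumed paired row-wise
    # (row i of each tensor is presumed to describe the same anatomy).
    ct = F.normalize(ct_feats, dim=1)    # unit-norm CT embeddings
    mri = F.normalize(mri_feats, dim=1)  # unit-norm MRI embeddings
    logits = ct @ mri.t() / temperature  # (N, N) cosine-similarity matrix
    targets = torch.arange(ct.size(0), device=ct.device)  # positives on the diagonal
    # Symmetric cross-entropy: CT-to-MRI and MRI-to-CT retrieval.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example usage with random tensors standing in for encoder outputs.
ct = torch.randn(8, 128)
mri = torch.randn(8, 128)
print(contrastive_similarity_loss(ct, mri).item())

Under this assumed formulation, minimizing the loss pulls together CT and MRI features of corresponding regions and pushes apart non-corresponding ones, which is one common way to encourage a shared, modality-independent latent space.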

Authors

  • Xiaoyu Chen
  • Hong-Yu Zhou
    Department of Respiratory and Critical Care Medicine, National Clinical Research Center for Geriatrics, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu 610041, P.R. China; Department of Computer Science, The University of Hong Kong, Pokfulam, Hong Kong.
  • Feng Liu
    Department of Vascular and Endovascular Surgery, The First Medical Center of Chinese PLA General Hospital, 100853 Beijing, China.
  • Jiansen Guo
    Department of Computer Science, Xiamen University, Siming District, Xiamen, Fujian, PR China.
  • Liansheng Wang
    Department of Computer Science, Xiamen University, Xiamen 361005, China.
  • Yizhou Yu
    Department of Computer Science, The University of Hong Kong, Pok Fu Lam, Hong Kong.