Automated cardiac segmentation of cross-modal medical images using unsupervised multi-domain adaptation and spatial neural attention structure.

Journal: Medical Image Analysis

Abstract

Accurate cardiac segmentation of multimodal images, e.g., magnetic resonance (MR) and computed tomography (CT) images, plays a pivotal role in the auxiliary diagnosis, treatment, and postoperative assessment of cardiovascular diseases. However, training a well-behaved segmentation model for cross-modal cardiac image analysis is challenging, owing to the diverse appearances/distributions of images acquired from different devices under different acquisition conditions. For instance, a segmentation model well trained on a source domain of MR images often fails to segment CT images. In this work, a cardiac segmentation scheme for cross-modal images is proposed, built on a symmetric fully convolutional neural network (SFCNN) with unsupervised multi-domain adaptation (UMDA) and a spatial neural attention (SNA) structure, termed UMDA-SNA-SFCNN; it has the merit of requiring no annotations on the test domain. Specifically, UMDA-SNA-SFCNN incorporates SNA into the classic adversarial domain adaptation network to highlight relevant regions while restraining irrelevant areas in the cross-modal images, so as to suppress negative transfer during unsupervised domain adaptation. In addition, multi-layer feature discriminators and a predictive segmentation-mask discriminator are attached to the multi-layer features and the predicted segmentation mask of the backbone network, SFCNN, to realize fine-grained alignment of the unsupervised cross-modal feature domains. Extensive confirmative and comparative experiments on the benchmark Multi-Modality Whole Heart Segmentation (MM-WHS) challenge dataset show that the proposed model is superior to state-of-the-art cross-modal segmentation methods.
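To make the two mechanisms named in the abstract concrete, the sketch below illustrates, in PyTorch, (1) a spatial attention gate that re-weights feature maps to highlight relevant regions, and (2) an adversarial discriminator of the kind that could be attached to intermediate features or to the predicted mask for source-to-target (e.g., MR-to-CT) alignment. This is a minimal sketch of the general technique only: the class names (`SpatialNeuralAttention`, `PatchDiscriminator`, `alignment_losses`) and all layer sizes are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch, assuming a per-pixel sigmoid gate for spatial attention and
# standard GAN-style binary cross-entropy alignment losses; the paper's actual
# SFCNN backbone and discriminator designs are not given in the abstract.
import torch
import torch.nn as nn

class SpatialNeuralAttention(nn.Module):
    """Per-pixel gate in [0, 1] that highlights relevant regions."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1),
                                  nn.Sigmoid())

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return feat * self.gate(feat)  # restrain irrelevant areas

class PatchDiscriminator(nn.Module):
    """Convolutional critic usable on feature maps or softmax masks."""
    def __init__(self, in_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=2, padding=1),  # source-vs-target logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

bce = nn.BCEWithLogitsLoss()

def alignment_losses(disc: nn.Module,
                     feat_src: torch.Tensor,
                     feat_tgt: torch.Tensor):
    """Adversarial alignment at one layer (or on the predicted mask).

    d_loss trains the discriminator to separate source from target;
    g_loss trains the segmenter so target activations fool the critic,
    pulling the two domains' feature distributions together.
    """
    src_logits = disc(feat_src.detach())
    tgt_logits = disc(feat_tgt.detach())
    d_loss = bce(src_logits, torch.ones_like(src_logits)) + \
             bce(tgt_logits, torch.zeros_like(tgt_logits))
    fool_logits = disc(feat_tgt)  # gradients flow back into the segmenter
    g_loss = bce(fool_logits, torch.ones_like(fool_logits))
    return d_loss, g_loss
```

In a full scheme of the kind the abstract describes, one such critic would sit at each selected layer of the backbone plus one on the predicted mask, while the supervised segmentation loss is computed only on the annotated source (MR) domain.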

Authors

  • Jinping Liu
    Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, Hunan, 410081, China. ljp202518@163.com.
  • Hui Liu
    Institute of Urology and Nephrology, The First Affiliated Hospital of Guangxi Medical University, Nanning, China.
  • Subo Gong
    Department of Geriatrics, The Second Xiangya Hospital of Central South University, Changsha, 410011, China. aibaoziaidami@163.com.
  • Zhaohui Tang
    School of Mathematics, Physics and Computing, Centre for Health Research, University of Southern Queensland, Toowoomba, QLD, Australia.
  • Yongfang Xie
School of Automation, Central South University, Changsha, 410083, China.
  • Huazhan Yin
Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, 410081, China; Cognition and Human Behavior Key Laboratory of Hunan Province, Hunan Normal University, Changsha, 410081, China.
  • Jean Paul Niyoyita
    College of Science and Technology, University of Rwanda, Kigali, 3286, Rwanda.