Unsupervised cross-modality domain adaptation via source-domain labels guided contrastive learning for medical image segmentation.

Journal: Medical & Biological Engineering & Computing

Abstract

Unsupervised domain adaptation (UDA) offers a promising way to improve discriminative performance on a target domain by transferring knowledge from a labeled source domain, allowing a model to adapt to the target domain's feature distribution. This paper proposes a unified domain adaptation framework that performs cross-modality medical image segmentation from two perspectives: image alignment and feature alignment. For image alignment, the loss function of Fourier-based Contrastive Style Augmentation (FCSA) is fine-tuned to strengthen the effect of style perturbation and thereby improve robustness. For feature alignment, a Source-domain Labels Guided Contrastive Learning (SLGCL) module is designed to encourage the target domain to align its class-wise features with those of the source domain. In addition, a generative adversarial network is incorporated to keep the spatial layout and local context consistent in the generated image space. To the best of our knowledge, this is the first attempt to use source-domain class intensity information to guide target-domain class intensity information for feature alignment in an unsupervised domain adaptation setting. Extensive experiments on a public whole-heart segmentation task show that the proposed method outperforms state-of-the-art UDA methods for medical image segmentation.
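To make the two alignment ideas concrete, the sketch below illustrates the general techniques the abstract builds on: Fourier-domain amplitude swapping for cross-modality style change, and a class-prototype contrastive loss in which source-domain labels anchor target-domain features. This is a minimal illustration, not the authors' released code; the function names, the low-frequency ratio `beta`, and the temperature `tau` are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's implementation) of
# Fourier-based style mixing and source-label-guided contrastive alignment.
import torch
import torch.nn.functional as F


def fourier_style_mix(content: torch.Tensor, style: torch.Tensor,
                      beta: float = 0.1) -> torch.Tensor:
    """Swap the low-frequency amplitude of `content` with that of `style`.

    Both tensors are (B, C, H, W). The phase (anatomical structure) of
    `content` is kept; only low-frequency amplitude (style/intensity
    statistics) is replaced by the other modality's.
    """
    fft_c = torch.fft.fft2(content)
    fft_s = torch.fft.fft2(style)
    amp_c, pha_c = torch.abs(fft_c), torch.angle(fft_c)
    amp_s = torch.abs(fft_s)

    # Centre the spectrum so the low frequencies sit in the middle.
    amp_c = torch.fft.fftshift(amp_c, dim=(-2, -1))
    amp_s = torch.fft.fftshift(amp_s, dim=(-2, -1))

    _, _, h, w = content.shape
    bh, bw = int(h * beta), int(w * beta)
    cy, cx = h // 2, w // 2
    amp_c[..., cy - bh:cy + bh, cx - bw:cx + bw] = \
        amp_s[..., cy - bh:cy + bh, cx - bw:cx + bw]
    amp_c = torch.fft.ifftshift(amp_c, dim=(-2, -1))

    stylised = torch.fft.ifft2(amp_c * torch.exp(1j * pha_c))
    return stylised.real


def slgcl_loss(src_feat, src_label, tgt_feat, tgt_pseudo,
               tau: float = 0.1) -> torch.Tensor:
    """Contrast target features against source class prototypes.

    src_feat/tgt_feat: (N, D) pixel features; src_label/tgt_pseudo: (N,)
    class indices (pseudo-labels on the target side, since no target
    annotations exist in UDA). Assumes every pseudo-label class also
    appears in the source batch.
    """
    classes = torch.unique(src_label)
    # Per-class source prototypes act as anchors the target aligns to.
    protos = torch.stack([src_feat[src_label == c].mean(0) for c in classes])
    protos = F.normalize(protos, dim=1)
    tgt = F.normalize(tgt_feat, dim=1)

    logits = tgt @ protos.t() / tau            # (N, num_classes) similarities
    idx = torch.searchsorted(classes, tgt_pseudo)  # map labels to proto rows
    return F.cross_entropy(logits, idx)
```

Using source prototypes as contrastive anchors mirrors the abstract's central idea: class-wise intensity statistics from the labeled source domain guide where the unlabeled target-domain features should cluster, rather than letting the target self-organize from pseudo-labels alone.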

Authors

  • Wenshuang Chen
    School of Electronic and Information Engineering, South China University of Technology, Wushan Road 381, Guangzhou, Guangdong, 510641, China.
  • Qi Ye
    Department of Pharmaceutical Sciences, Northeastern University, Boston, MA 02115, USA.
  • Lihua Guo
  • Qi Wu
    Endoscopy Center, Peking University Cancer Hospital and Institute, Beijing, China.