Self-supervised learning improves robustness of deep learning lung tumor segmentation models to CT imaging differences.

Journal: Medical physics
PMID:

Abstract

BACKGROUND: Self-supervised learning (SSL) is an approach for extracting useful feature representations from unlabeled data, enabling fine-tuning on downstream tasks with limited labeled examples. Self-pretraining is an SSL approach that uses the curated downstream-task dataset for both pretraining and fine-tuning. The availability of large, diverse, and uncurated public medical image sets presents an opportunity to create foundation models that are robust to imaging variations by applying SSL "in the wild". However, the benefit of wild- versus self-pretraining has not been studied for medical image analysis.
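The abstract contrasts self-pretraining (SSL on the curated downstream dataset) with wild-pretraining (SSL on large uncurated collections), both followed by supervised fine-tuning. The sketch below illustrates only the generic two-stage SSL workflow described here; the tiny encoder, the masked-reconstruction pretext task, and all names and shapes are illustrative assumptions, not the authors' actual architecture or training recipe.

```python
# Hedged sketch of the generic SSL workflow the abstract describes:
# (1) pretrain an encoder on unlabeled CT slices with a pretext task
#     (masked-image reconstruction here, one common choice), then
# (2) fine-tune the encoder plus a segmentation head on labeled tumor data.
# Everything below is an illustrative assumption, not the paper's method.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Placeholder 2D CNN encoder standing in for a real backbone."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

def pretrain_step(encoder, decoder, ct_slice, mask_ratio=0.6):
    """One self-supervised step: reconstruct randomly masked pixels."""
    mask = (torch.rand_like(ct_slice) < mask_ratio).float()
    recon = decoder(encoder(ct_slice * (1 - mask)))
    # Compute the loss only on masked pixels, a common masked-image-modeling choice.
    return ((recon - ct_slice) ** 2 * mask).mean()

def finetune_step(encoder, seg_head, ct_slice, tumor_mask):
    """One supervised step: segment tumors using the pretrained encoder."""
    logits = seg_head(encoder(ct_slice))
    return nn.functional.binary_cross_entropy_with_logits(logits, tumor_mask)

if __name__ == "__main__":
    enc = TinyEncoder()
    dec = nn.Conv2d(32, 1, 1)        # reconstruction decoder (pretraining)
    head = nn.Conv2d(32, 1, 1)       # segmentation head (fine-tuning)
    x = torch.randn(2, 1, 64, 64)    # stand-in for unlabeled CT slices
    y = (torch.rand(2, 1, 64, 64) > 0.9).float()  # stand-in tumor masks
    print("pretrain loss:", pretrain_step(enc, dec, x).item())
    print("finetune loss:", finetune_step(enc, head, x, y).item())
```

In self-pretraining, both steps would draw from the same curated downstream dataset; in wild-pretraining, the pretraining step would instead draw from large uncurated public image collections before fine-tuning on the curated data.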

Authors

  • Jue Jiang
    Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA.
  • Aneesh Rangnekar
    Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA.
  • Harini Veeraraghavan
    Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA; veerarah@mskcc.org.