Self-supervised learning improves robustness of deep learning lung tumor segmentation models to CT imaging differences.
Journal: Medical Physics
PMID: 39636237
Abstract
BACKGROUND: Self-supervised learning (SSL) is an approach for extracting useful feature representations from unlabeled data, enabling fine-tuning on downstream tasks with limited labeled examples. Self-pretraining is an SSL approach that uses the curated downstream-task dataset for both pretraining and fine-tuning. The availability of large, diverse, and uncurated public medical image sets presents an opportunity to create foundation models, by applying SSL "in the wild," that are robust to imaging variations. However, the benefit of wild- versus self-pretraining has not been studied for medical image analysis.