Less is More: Selective reduction of CT data for self-supervised pre-training of deep learning models with contrastive learning improves downstream classification performance.

Journal: Computers in Biology and Medicine

Abstract

BACKGROUND: Self-supervised pre-training of deep learning models with contrastive learning is a widely used technique in image analysis. Current findings indicate strong potential for contrastive pre-training on medical images. However, further research is necessary to incorporate the particular characteristics of these images.

Authors

  • Daniel Wolf
Visual Computing Research Group, Institute of Media Informatics, Ulm University, Ulm, Germany; Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany.
  • Tristan Payer
    Visual Computing Research Group, Institute of Media Informatics, Ulm University, Ulm, Germany.
  • Catharina Silvia Lisson
    Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany.
  • Christoph Gerhard Lisson
    Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany.
  • Meinrad Beer
Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany.
  • Michael Götz
    Medical Image Analysis, Division Medical Image Computing, DKFZ Heidelberg, Germany; National Center for Tumor Diseases (NCT), Heidelberg, Germany. Electronic address: m.goetz@dkfz-heidelberg.de.
  • Timo Ropinski
    Visual Computing Group, Institute of Media Informatics, Ulm University, Ulm, Germany.