Accounting for data variability in multi-institutional distributed deep learning for medical imaging.

Journal: Journal of the American Medical Informatics Association (JAMIA)

Abstract

OBJECTIVES: Sharing patient data across institutions to train generalizable deep learning models is challenging due to regulatory and technical hurdles. Distributed learning, where model weights are shared instead of patient data, presents an attractive alternative. Cyclical weight transfer (CWT) has recently been demonstrated as an effective distributed learning method for medical imaging with homogeneous data across institutions. In this study, we optimize CWT to overcome performance losses from variability in training sample sizes and label distributions across institutions.
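The cyclical weight transfer (CWT) setup described above can be sketched in a few lines: model weights visit each institution in turn, are updated on that institution's private data, and only the weights ever leave a site. The sketch below is illustrative only; the toy one-parameter linear model, the synthetic per-institution datasets, and the hyperparameters (`lr`, `epochs`, `cycles`) are assumptions for demonstration, not the authors' implementation, which trains deep networks on medical images.

```python
import random

def local_update(w, data, lr=0.01, epochs=5):
    """One institution trains on its private data; only the weight w leaves the site."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # gradient of squared error (w*x - y)**2
            w -= lr * grad
    return w

def cyclical_weight_transfer(institutions, cycles=10):
    """Cycle the shared weight through every institution, repeatedly."""
    w = 0.0  # shared model weight, initialized once centrally
    for _ in range(cycles):
        for data in institutions:  # the weight visits each site in turn
            w = local_update(w, data)
    return w

# Synthetic private datasets: each of 3 sites samples y = 3x plus noise.
random.seed(0)
institutions = [
    [(x, 3 * x + random.gauss(0, 0.1))
     for x in (random.random() for _ in range(20))]
    for _ in range(3)
]
w = cyclical_weight_transfer(institutions)  # converges toward the true slope 3
```

In this idealized setting all three sites hold the same number of samples from the same distribution, which is exactly the homogeneity assumption the study relaxes: with unequal sample sizes or skewed label distributions across institutions, plain CWT degrades, motivating the optimizations the paper proposes.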

Authors

  • Niranjan Balachandar
    Department of Radiology and Biomedical Data Science, Stanford University, Palo Alto, CA 94305, USA.
  • Ken Chang
    Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts.
  • Jayashree Kalpathy-Cramer
    Department of Radiology, MGH/Harvard Medical School, Charlestown, Massachusetts.
  • Daniel L Rubin
    Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA 94305-5479, USA.