Improving multiple sclerosis lesion segmentation across clinical sites: A federated learning approach with noise-resilient training.

Journal: Artificial intelligence in medicine

Abstract

Accurately measuring the evolution of Multiple Sclerosis (MS) with magnetic resonance imaging (MRI) critically informs understanding of disease progression and helps direct therapeutic strategy. Deep learning models have shown promise for automatically segmenting MS lesions, but the scarcity of accurately annotated data hinders progress in this area. Obtaining sufficient data from a single clinical site is challenging and does not provide the data heterogeneity needed for model robustness. Conversely, collecting data from multiple sites introduces data privacy concerns and potential label noise due to varying annotation standards. To address this dilemma, we explore the federated learning framework while accounting for label noise. Our approach enables collaboration among multiple clinical sites without compromising data privacy, under a federated learning paradigm that incorporates a noise-robust training strategy based on label correction. Specifically, we introduce a Decoupled Hard Label Correction (DHLC) strategy that considers the imbalanced distribution and fuzzy boundaries of MS lesions, enabling the correction of false annotations based on prediction confidence. We also introduce a Centrally Enhanced Label Correction (CELC) strategy, which leverages the aggregated central model as a correction teacher for all sites, enhancing the reliability of the correction process. Extensive experiments conducted on two multi-site datasets demonstrate the effectiveness and robustness of our proposed methods, indicating their potential for clinical applications in multi-site collaborations to train better deep learning models at lower data collection and annotation cost.
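The core idea of confidence-based hard label correction described above can be sketched in a few lines. This is a minimal illustrative example, not the paper's exact algorithm: the function name, the per-class ("decoupled") thresholds, and the flat voxel arrays are assumptions introduced here for clarity. Separate thresholds for the lesion and background classes reflect the class imbalance the abstract mentions; in the CELC variant, the predicted probabilities would come from the aggregated central model rather than each site's local model.

```python
import numpy as np

def hard_label_correction(labels, probs, thr_pos=0.9, thr_neg=0.9):
    """Illustrative sketch of decoupled hard label correction.

    labels: possibly noisy binary annotations, shape (N,) with values {0, 1}
    probs:  model-predicted lesion probabilities, shape (N,)
    thr_pos, thr_neg: separate confidence thresholds for flipping
        background->lesion and lesion->background, allowing stricter
        handling of the rare lesion class (illustrative values).
    """
    corrected = labels.copy()
    # Background voxels the model confidently predicts as lesion: flip to 1.
    corrected[(labels == 0) & (probs >= thr_pos)] = 1
    # Lesion voxels the model confidently predicts as background: flip to 0.
    corrected[(labels == 1) & (probs <= 1.0 - thr_neg)] = 0
    return corrected

# Example: only the high-confidence disagreements are corrected.
labels = np.array([0, 0, 1, 1])
probs = np.array([0.95, 0.50, 0.02, 0.80])
print(hard_label_correction(labels, probs))  # -> [1 0 0 1]
```

Low-confidence disagreements (e.g. the voxel with probability 0.50) are left untouched, so only annotations the model strongly contradicts are revised.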

Authors

  • Lei Bai
    Shanghai AI Laboratory, Shanghai, China.
  • Dongang Wang
    Brain and Mind Centre, The University of Sydney, NSW 2050, Australia; Sydney Neuroimaging Analysis Centre, 94 Mallett Street, NSW 2050, Australia. Electronic address: dongang.wang@sydney.edu.au.
  • Hengrui Wang
    Sydney Neuroimaging Analysis Centre, 94 Mallett Street, NSW 2050, Australia.
  • Michael Barnett
    Brain and Mind Centre, The University of Sydney, Sydney 2050, Australia; Sydney Neuroimaging Analysis Centre, Sydney 2050, Australia.
  • Mariano Cabezas
Research Institute of Computer Vision and Robotics, University of Girona, Spain.
  • Weidong Cai
School of Computer Science, The University of Sydney, Darlington, NSW, Australia.
  • Fernando Calamante
    School of Biomedical Engineering, The University of Sydney, Sydney, NSW 2006, Australia; Sydney Imaging - The University of Sydney, Sydney, Australia.
  • Kain Kyle
    Brain and Mind Centre, The University of Sydney, NSW 2050, Australia; Sydney Neuroimaging Analysis Centre, 94 Mallett Street, NSW 2050, Australia.
  • Dongnan Liu
  • Linda Ly
    Sydney Neuroimaging Analysis Centre, 94 Mallett Street, NSW 2050, Australia.
  • Aria Nguyen
    Sydney Neuroimaging Analysis Centre, 94 Mallett Street, NSW 2050, Australia.
  • Chun-Chien Shieh
    Faculty of Medicine and Health, ACRF Image X Institute, The University of Sydney, Sydney, NSW, Australia.
  • Ryan Sullivan
    School of Biomedical Engineering, The University of Sydney, NSW 2006, Australia; Australian Imaging Service, NSW 2006, Australia.
  • Geng Zhan
    Brain and Mind Centre, The University of Sydney, NSW 2050, Australia; Sydney Neuroimaging Analysis Centre, 94 Mallett Street, NSW 2050, Australia.
  • Wanli Ouyang
    Shanghai AI Laboratory, Shanghai, China.
  • Chenyu Wang
    Brain and Mind Centre, The University of Sydney, Sydney 2050, Australia; Sydney Neuroimaging Analysis Centre, Sydney 2050, Australia. Electronic address: chenyu.wang@sydney.edu.au.