Communication Efficient Federated Learning for Multi-Organ Segmentation via Knowledge Distillation With Image Synthesis.
Journal:
IEEE Transactions on Medical Imaging
PMID:
40030865
Abstract
Federated learning (FL) methods for multi-organ segmentation in CT scans are gaining popularity, but they generally require numerous rounds of parameter exchange between a central server and clients. This repetitive sharing of parameters may not be practical given the varying network infrastructures of clients and the large volume of transmitted data. Data heterogeneity among clients further increases the number of rounds required, i.e., clients may differ in the type of data they contribute. For example, they might provide label maps of only some organs (i.e., partial labels) because segmenting all organs visible in the CT is not part of their clinical protocol. To this end, we propose a communication-efficient approach for FL with partial labels. Specifically, the parameters of local models are transmitted once to a central server, and the global model is trained via knowledge distillation (KD) from the local models. While unlabeled public data can serve as inputs for KD, model accuracy is often limited by distribution shifts between local and public datasets. Herein, we propose to generate synthetic images from the clients' models as additional KD inputs to mitigate the shift between public and local data. In addition, our proposed method offers the flexibility of further fine-tuning through several rounds of communication using existing FL algorithms, leading to enhanced performance. Extensive evaluation on public datasets in FL scenarios with few communication rounds shows that our approach substantially improves over state-of-the-art methods.
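To make the one-round KD aggregation described above concrete, the following is a minimal sketch, not the authors' implementation: it assumes the client segmentation models have already been received by the server, treats a mix of unlabeled public images and pre-generated synthetic images as the distillation inputs, and distills the averaged soft predictions of the frozen client models into a global model. All names (SmallSegNet, distill_global_model) are hypothetical, the toy architecture stands in for the real multi-organ network, and partial-label handling as well as the image-synthesis step itself are omitted.

```python
# Sketch: server-side knowledge distillation from client models into a
# global model, using public + synthetic images as distillation inputs.
# Hypothetical names/architecture; not the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallSegNet(nn.Module):
    """Toy segmentation network standing in for the real multi-organ model."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),
        )

    def forward(self, x):
        return self.body(x)  # per-pixel class logits


def distill_global_model(client_models, distill_batches, num_classes=5,
                         epochs=2, temperature=2.0, lr=1e-3):
    """Train a global model to match the averaged soft predictions of the
    (frozen) client models on unlabeled distillation images."""
    global_model = SmallSegNet(num_classes)
    optimizer = torch.optim.Adam(global_model.parameters(), lr=lr)
    for m in client_models:
        m.eval()

    for _ in range(epochs):
        for x in distill_batches:          # x: (B, 1, H, W), no labels needed
            with torch.no_grad():
                # Ensemble teacher: average of the clients' soft predictions.
                teacher_probs = torch.stack(
                    [F.softmax(m(x) / temperature, dim=1) for m in client_models]
                ).mean(dim=0)
            student_log_probs = F.log_softmax(global_model(x) / temperature, dim=1)
            loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return global_model


if __name__ == "__main__":
    clients = [SmallSegNet() for _ in range(3)]                # received once from clients
    public_batches = [torch.randn(2, 1, 64, 64) for _ in range(4)]
    synthetic_batches = [torch.randn(2, 1, 64, 64) for _ in range(4)]  # placeholder for synthesized CTs
    global_model = distill_global_model(clients, public_batches + synthetic_batches)
```

In the abstract's setting, the synthetic batches would be generated from the clients' models themselves so that the distillation inputs better match the local data distributions; the random tensors above are placeholders only, and the resulting global model could afterwards be refined with a few rounds of a standard FL algorithm.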