FedVGM: Enhancing Federated Learning Performance on Multi-Dataset Medical Images with XAI.

Journal: IEEE Journal of Biomedical and Health Informatics
Published Date:

Abstract

Advances in deep learning have transformed medical imaging, yet progress is hindered by data privacy regulations and fragmented datasets across institutions. To address these challenges, we propose FedVGM, a privacy-preserving federated learning framework for multi-modal medical image analysis. FedVGM integrates four imaging modalities (brain MRI, breast ultrasound, chest X-ray, and lung CT) spanning 14 diagnostic classes, without centralizing patient data. Using transfer learning and an ensemble of VGG16 and MobileNetV2, FedVGM achieves 97.7% $\pm$ 0.01 accuracy on the combined dataset and 91.9-99.1% across individual modalities. We evaluate three aggregation strategies and show median aggregation to be the most effective. To ensure clinical interpretability, we apply explainable AI techniques and validate results through performance metrics, statistical analysis, and k-fold cross-validation. FedVGM offers a robust, scalable solution for collaborative medical diagnostics, supporting clinical deployment while preserving data privacy.
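The abstract compares server-side aggregation strategies and reports coordinate-wise median aggregation as the most effective. As a rough illustration of the idea (not the paper's actual implementation; the function name, layer layout, and strategies shown here are assumptions), the sketch below aggregates per-layer weight tensors collected from several clients, either by the FedAvg-style mean or by the coordinate-wise median, which is more robust to outlying client updates:

```python
import numpy as np

def aggregate(client_weights, strategy="median"):
    """Combine per-layer weight arrays from several federated clients.

    client_weights: list with one entry per client; each entry is a list of
        np.ndarray layer weights (all clients share the same layer shapes).
    strategy: "mean" (FedAvg-style averaging) or "median" (coordinate-wise
        median, reported in the abstract as the most effective choice).
    Returns a list of aggregated layer arrays for the global model.
    """
    n_layers = len(client_weights[0])
    aggregated = []
    for layer in range(n_layers):
        # Stack the same layer from every client: shape (n_clients, *layer_shape).
        stacked = np.stack([w[layer] for w in client_weights])
        if strategy == "median":
            aggregated.append(np.median(stacked, axis=0))
        elif strategy == "mean":
            aggregated.append(np.mean(stacked, axis=0))
        else:
            raise ValueError(f"unknown strategy: {strategy}")
    return aggregated
```

With three clients whose weights for one coordinate are 1.0, 2.0, and 10.0, the median yields 2.0 while the mean is pulled to about 4.33 by the outlier, which is why median aggregation tolerates anomalous client updates better.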

Authors

  • Mst Sazia Tahosin
  • Md Alif Sheakh
  • Mohammad Jahangir Alam
  • Md Mehedi Hassan
    School of Food and Biological Engineering, Jiangsu University, Zhenjiang 212013, PR China.
  • Anupam Kumar Bairagi
    Computer Science and Engineering Discipline, Khulna University, Khulna 9208, Bangladesh.
  • Shahab Abdulla
  • Samah Alshathri
    Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia.
  • Walid El-Shafai
    Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt.
