A hybrid explainable federated-based vision transformer framework for breast cancer prediction via risk factors.
Journal:
Scientific Reports
Published Date:
May 27, 2025
Abstract
Breast cancer remains a leading cause of mortality in women, underscoring the need for timely and accurate diagnosis. This paper addresses this challenge by introducing a comprehensive explainable federated learning framework for breast cancer prediction. We evaluate three deep learning approaches in both centralized and federated settings: (1) individual artificial intelligence (AI) models, (2) high-level feature space ensemble models, and (3) a hybrid model combining global Vision Transformer (ViT) and local convolutional neural network (CNN) features. These models are assessed on binary, multi-class, and Breast Imaging Reporting and Data System (BI-RADS) classification tasks using a unique dataset encompassing real-world risk factors. In the federated setting, we employ three clients that use the same approaches as the centralized setting and aggregate their predictions with a global AI model. An explainable AI (XAI) technique is incorporated to enhance the models' transparency. Our federated learning approach demonstrates superior performance, achieving accuracies of 98.65%, 97.30%, and 95.59% on the binary, multi-class, and BI-RADS tasks, respectively. Evaluated with 95% confidence intervals (CIs) and Area Under the Curve (AUC) analysis, the proposed model attains a top AUC of 0.970 [0.917-1]. This Local Interpretable Model-agnostic Explanations (LIME) XAI-based federated learning framework offers a promising solution for privacy-preserving and accurate breast cancer prediction in both research and clinical practice.
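The abstract gives no implementation details for the hybrid branch, but one common way to combine global ViT features with local CNN features is simple concatenation ahead of a joint classification head. The sketch below is a minimal PyTorch illustration under that assumption; the backbone choices (torchvision's ViT-B/16 and ResNet-18), the feature dimensions, and the `HybridViTCNN` class name are illustrative assumptions, not the paper's stated architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, vit_b_16

class HybridViTCNN(nn.Module):
    """Hypothetical sketch: global ViT features fused with local CNN
    features via concatenation, followed by a shared classifier."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Global branch: ViT-B/16 backbone with its classification head removed.
        self.vit = vit_b_16(weights=None)
        vit_dim = self.vit.heads.head.in_features  # 768 for ViT-B/16
        self.vit.heads = nn.Identity()
        # Local branch: ResNet-18 trunk emitting pooled convolutional features.
        self.cnn = resnet18(weights=None)
        cnn_dim = self.cnn.fc.in_features  # 512 for ResNet-18
        self.cnn.fc = nn.Identity()
        # Joint head over the concatenated global + local feature vector.
        self.classifier = nn.Linear(vit_dim + cnn_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.vit(x), self.cnn(x)], dim=1)  # (N, 768 + 512)
        return self.classifier(fused)

# Smoke test on a dummy 224x224 RGB batch.
logits = HybridViTCNN(num_classes=2)(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```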
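The federated setting is described only as three clients whose predictions are aggregated by a global AI model. One plausible reading is stacked generalization: per-client class probabilities become meta-features for a global learner. The logistic-regression aggregator, the `stack_probs` helper, and the toy data below are assumptions for illustration, not the paper's stated aggregation scheme.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def stack_probs(client_probs: list[np.ndarray]) -> np.ndarray:
    """Concatenate per-client class probabilities into meta-features."""
    return np.concatenate(client_probs, axis=1)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)                    # toy binary labels
clients = [rng.random((200, 2)) for _ in range(3)]  # three clients' raw scores
clients = [p / p.sum(axis=1, keepdims=True) for p in clients]  # normalize to probabilities

# Global aggregator fit on the stacked client outputs (stacking / meta-learning).
global_model = LogisticRegression(max_iter=1000)
global_model.fit(stack_probs(clients), y)
print(global_model.predict(stack_probs(clients))[:5])  # aggregated predictions
```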
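The abstract states that LIME provides the model transparency but not how it is wired in, and it does not say whether explanations target image inputs or tabular risk factors. The sketch below assumes image inputs and uses the `lime` package's image explainer; `predict_fn` is a hypothetical stand-in for an inference wrapper around the trained classifier.

```python
import numpy as np
from lime import lime_image

def predict_fn(images: np.ndarray) -> np.ndarray:
    """Stand-in for the trained model: maps (N, H, W, 3) arrays to class
    probabilities. Replace with the real model's inference wrapper."""
    p = np.random.rand(len(images), 2)
    return p / p.sum(axis=1, keepdims=True)

image = np.random.rand(224, 224, 3)  # dummy sample in place of a real image

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=1, num_samples=1000  # perturbations for the local surrogate
)
# Superpixel mask highlighting regions that support the top predicted class.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
```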