Multimodal skin lesion classification through cross-attention fusion and collaborative edge computing.

Journal: Computerized Medical Imaging and Graphics: the official journal of the Computerized Medical Imaging Society

Abstract

Skin cancer is a significant global health concern requiring early and accurate diagnosis to improve patient outcomes. While deep learning-based computer-aided diagnosis (CAD) systems have emerged as effective diagnostic support tools, they often face three key limitations: low diagnostic accuracy due to reliance on single-modality data (e.g., dermoscopic images), high network latency in cloud deployments, and privacy risks from transmitting sensitive medical data to centralized servers. To overcome these limitations, we propose a unified solution that integrates a multimodal deep learning model with a collaborative inference scheme for skin lesion classification. Our model enhances diagnostic accuracy by fusing dermoscopic images with patient metadata via a novel cross-attention-based feature fusion mechanism. Meanwhile, the collaborative scheme distributes computational tasks across IoT and edge devices, reducing latency and enhancing data privacy by processing sensitive information locally. Experiments on multiple benchmark datasets demonstrate the effectiveness and generalizability of this approach; for example, it achieves a classification accuracy of 95.73% on the HAM10000 dataset, outperforming competing methods. Furthermore, the collaborative inference scheme significantly improves efficiency, achieving latency speedups of up to 20% and 47% over device-only and edge-only schemes, respectively.
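
To make the fusion mechanism concrete, below is a minimal PyTorch sketch of cross-attention between dermoscopic image features and patient metadata. All dimensions, layer names, and the single-block structure are illustrative assumptions, not the authors' published architecture.

    # Minimal cross-attention fusion sketch (hypothetical shapes/names).
    import torch
    import torch.nn as nn

    class CrossAttentionFusion(nn.Module):
        def __init__(self, img_dim=512, meta_dim=32, embed_dim=256, num_heads=4):
            super().__init__()
            # Project both modalities into a shared embedding space.
            self.img_proj = nn.Linear(img_dim, embed_dim)
            self.meta_proj = nn.Linear(meta_dim, embed_dim)
            # Image tokens (queries) attend to metadata tokens (keys/values).
            self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(embed_dim)

        def forward(self, img_feats, meta_feats):
            # img_feats: (B, N, img_dim) flattened CNN/patch features
            # meta_feats: (B, M, meta_dim) embedded metadata fields (age, sex, site, ...)
            q = self.img_proj(img_feats)
            kv = self.meta_proj(meta_feats)
            fused, _ = self.attn(q, kv, kv)   # cross-attention over metadata
            return self.norm(q + fused)       # residual connection + layer norm

    # Usage on dummy tensors:
    model = CrossAttentionFusion()
    img = torch.randn(2, 49, 512)   # e.g., a 7x7 feature map flattened to 49 tokens
    meta = torch.randn(2, 3, 32)    # e.g., age, sex, lesion-site embeddings
    out = model(img, meta)          # (2, 49, 256) fused representation

The collaborative inference scheme can be illustrated in the same spirit: early layers run on the IoT device, so raw patient data never leaves it, and only an intermediate tensor is shipped to an edge server that completes the computation. The toy backbone and fixed split point below are assumptions for illustration, not the paper's partitioning strategy.

    # Minimal device/edge split-inference sketch (illustrative backbone).
    import torch
    import torch.nn as nn

    backbone = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # runs on the IoT device
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # runs on the edge server
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 7),                            # 7 lesion classes (HAM10000)
    )

    split = 3  # chosen to balance on-device compute against transmission cost
    device_part, edge_part = backbone[:split], backbone[split:]

    x = torch.randn(1, 3, 224, 224)   # dermoscopic image stays on-device
    intermediate = device_part(x)     # local feature extraction
    payload = intermediate.detach()   # in deployment, serialized and sent to the edge
    logits = edge_part(payload)       # edge server finishes the inference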

Authors

  • Nhu-Y Tran-Van
    University of Information Technology, Ho Chi Minh City, Viet Nam; Vietnam National University, Ho Chi Minh City, Viet Nam. Electronic address: ytvn@uit.edu.vn.
  • Kim-Hung Le
    University of Information Technology, Ho Chi Minh City, Viet Nam; Vietnam National University, Ho Chi Minh City, Viet Nam. Electronic address: hunglk@uit.edu.vn.