Multi-modality medical image classification with ResoMergeNet for cataract, lung cancer, and breast cancer diagnosis.

Journal: Computers in Biology and Medicine
PMID:

Abstract

The variability of image modalities poses significant challenges in medical image classification: traditional deep learning models often struggle to adapt to different image types, leading to suboptimal performance across diverse datasets. This is critical in the diagnosis of conditions such as cataracts and cancers, where imaging data spans multiple modalities, including visible eye images for cataracts and histopathological images for cancers, among others. Cataracts, a leading cause of blindness, and lung and breast cancers, major contributors to cancer-related deaths, all require early detection for effective intervention. However, many existing models fall short in handling modality differences, limiting their performance. To address this, we propose ResoMergeNet (RMN), designed for multi-modal medical image classification. RMN integrates transfer learning with two advanced techniques, the ResBoost framework and ConvMergeNet, enabling the model to effectively extract relevant features from both visible eye images and histopathological images. The architecture emphasizes both global and local feature extraction while minimizing the influence of irrelevant data, thus improving classification performance across modalities. Evaluated on the cataract dataset (binary classification), RMN achieved an accuracy of 99.17%. For lung cancer (3-class classification) it attained 100% accuracy, while on the BreakHis dataset (8-class classification) RMN reached 99.24% accuracy at 100× magnification and 98.28% at 200× magnification. These results demonstrate RMN's robustness and adaptability to varying image modalities, highlighting its potential as a reliable diagnostic tool in medical settings. Through its versatility, RMN offers a promising solution for improving early diagnosis and healthcare outcomes in the fight against cataracts and lung and breast cancers.
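The abstract does not specify the internals of the ResBoost framework or ConvMergeNet, so the sketch below is only a minimal illustration, assuming a PyTorch setting, of the general pattern it describes: an ImageNet-pretrained backbone (transfer learning) feeding parallel global and local feature branches whose outputs are merged before classification. The class name ResoMergeNetSketch, the ResNet-50 backbone, and both branch designs are hypothetical stand-ins, not the authors' architecture.

```python
# Illustrative sketch of the pattern described in the abstract:
# transfer-learning backbone -> parallel global/local branches -> merge.
# ResBoost and ConvMergeNet details are NOT given in the abstract;
# the branches below are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models


class ResoMergeNetSketch(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # Transfer learning: ImageNet-pretrained ResNet-50 backbone
        # (the paper's actual backbone is not stated in the abstract).
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        # Drop the average-pool and FC head; keep the convolutional trunk,
        # which yields (B, 2048, 7, 7) feature maps for 224x224 inputs.
        self.features = nn.Sequential(*list(backbone.children())[:-2])

        # Global branch: whole-image context via global average pooling.
        self.global_pool = nn.AdaptiveAvgPool2d(1)

        # Local branch (hypothetical stand-in for ConvMergeNet): a small
        # depthwise conv block that re-weights spatial features before
        # pooling, loosely mirroring the stated goal of "minimizing the
        # influence of irrelevant data".
        self.local_branch = nn.Sequential(
            nn.Conv2d(2048, 2048, kernel_size=3, padding=1, groups=2048),
            nn.BatchNorm2d(2048),
            nn.ReLU(inplace=True),
        )

        # Merge the two descriptors by concatenation, then classify.
        self.classifier = nn.Linear(2048 * 2, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x)
        g = self.global_pool(f).flatten(1)                      # global descriptor
        l = self.global_pool(self.local_branch(f)).flatten(1)   # local descriptor
        return self.classifier(torch.cat([g, l], dim=1))        # merged features


if __name__ == "__main__":
    # e.g. the 3-class lung-cancer setting from the abstract
    model = ResoMergeNetSketch(num_classes=3)
    logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 3])
```

Under this sketch, only the classifier head changes across the three tasks reported in the abstract: num_classes=2 for the cataract dataset, 3 for lung cancer, and 8 for BreakHis.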

Authors

  • Chukwuebuka Joseph Ejiyi
    College of Nuclear Technology and Automation Engineering, Sichuan Industrial Internet Intelligent Monitoring and Application Engineering Research Center, Chengdu University of Technology, Chengdu, Sichuan, China; Network and Data Security Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Chengdu, Sichuan, China.
  • Dongsheng Cai
    Sichuan Industrial Internet Intelligent Monitoring and Application Engineering Technology Research Centre, Chengdu University of Technology, Chenghua District, Chengdu, Sichuan, People's Republic of China. Electronic address: caidongsheng@cdut.edu.cn.
  • Delali Linda Fiasam
    School of Information and Software Engineering, University of Electronic Science and Technology of China, Sichuan, China. Electronic address: linda.delali92@outlook.com.
  • Bonsu Adjei-Arthur
    School of Computer Science and Engineering, University of Electronic Science and Technology of China, Sichuan, China. Electronic address: peeman34@gmail.com.
  • Sandra Obiora
    Leeds Business School, Leeds Beckett University, Leeds, LS1 3HB, United Kingdom.
  • Browne Judith Ayekai
    School of Computer Science and Engineering, University of Electronic Science and Technology of China, Sichuan, China. Electronic address: ayekaibrowne@hotmail.com.
  • Sarpong K Asare
    School of Electronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China. Electronic address: sarpong.kasare@gmail.com.
  • Anto Leoba Jonathan
    College of Nuclear Technology and Automation Engineering, Chengdu University of Technology, Sichuan, China. Electronic address: antoleobaj@gmail.com.
  • Zhen Qin
    College of Forestry, Southwest Forestry University, Kunming, Yunnan, China.