ICH-PRNet: a cross-modal intracerebral haemorrhage prognostic prediction method using joint-attention interaction mechanism.

Journal: Neural networks : the official journal of the International Neural Network Society
PMID:

Abstract

Accurately predicting intracerebral hemorrhage (ICH) prognosis is a critical and indispensable step in the clinical management of patients post-ICH. Recently, integrating artificial intelligence, particularly deep learning, has significantly enhanced prediction accuracy and relieved neurosurgeons of the burden of manual prognosis assessment. However, uni-modal methods have shown suboptimal performance due to the intricate pathophysiology of ICH. On the other hand, existing cross-modal approaches that incorporate tabular data have often failed to effectively extract complementary information and cross-modal features between modalities, thereby limiting their prognostic capabilities. This study introduces a novel cross-modal network, ICH-PRNet, designed to predict ICH prognostic outcomes. Specifically, we propose a joint-attention interaction encoder that effectively integrates computed tomography images and clinical texts within a unified representational space. Additionally, we define a multi-loss function comprising three components to comprehensively optimize cross-modal fusion capabilities. To balance the training process, we employ a self-adaptive dynamic prioritization algorithm that adjusts the weight of each component accordingly. Through these innovative designs, our model establishes robust semantic connections between modalities and uncovers rich, complementary cross-modal information, thereby achieving superior prediction results. Extensive experimental results and comparisons with state-of-the-art methods on both in-house and publicly available datasets demonstrate the superiority and efficacy of the proposed method. Our code is available at https://github.com/YU-deep/ICH-PRNet.git.
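The abstract does not specify the exact form of the self-adaptive dynamic prioritization algorithm. As an illustration only, the sketch below implements a common scheme with the same goal of balancing multi-component losses during training: Dynamic Weight Average, which raises the weight of a loss component whose value has recently stopped decreasing relative to the others. The function name, the temperature parameter, and the loss-history layout are assumptions, not the paper's method.

```python
import numpy as np

def dynamic_loss_weights(loss_history, temperature=2.0):
    """Compute per-component loss weights from recent training history.

    loss_history: sequence of per-epoch rows, each row holding the value of
    every loss component (shape: epochs x components). Returns weights that
    sum to the number of components, so the total loss scale is preserved.
    """
    loss_history = np.asarray(loss_history, dtype=float)
    k = loss_history.shape[1]  # number of loss components
    if loss_history.shape[0] < 2:
        # Not enough history to estimate descent rates: weight all equally.
        return np.full(k, 1.0)
    # Descent ratio per component: close to 1 means the loss has plateaued,
    # well below 1 means it is still dropping quickly.
    ratios = loss_history[-1] / loss_history[-2]
    # Softmax over ratios: plateaued components receive larger weights.
    scores = np.exp(ratios / temperature)
    return k * scores / scores.sum()
```

For example, with three components where the first has halved since the last epoch while the second is flat, the scheme shifts weight toward the second, prioritizing the objective that is currently hardest to improve.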

Authors

  • Xinlei Yu
    School of Computer Science, Hangzhou Dianzi University, Hangzhou, 310018, China.
  • Ahmed Elazab
    Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Boulevard, Shenzhen 518055, China; University of Chinese Academy of Sciences, 52 Sanlihe Road, Beijing 100864, China.
  • Ruiquan Ge
  • Jichao Zhu
    Department of Radiology, Longgang Central Hospital of Shenzhen, Shenzhen, 518116, China.
  • Lingyan Zhang
    Department of Medical Imaging, The Third Affiliated Hospital, Southern Medical University, Guangzhou, China.
  • Gangyong Jia
    College of Computer Science, Hangzhou Dianzi University, Hangzhou, China.
  • Qing Wu
Department of Environmental and Occupational Health, School of Community Health Sciences, University of Nevada, Las Vegas, Nevada.
  • Xiang Wan
    Institute of Computational and Theoretical Study and Department of Computer Science, Hong Kong Baptist University, Hong Kong, P.R. China.
  • Lihua Li
    College of Life Information Science and Instrument Engineering, Hangzhou Dianzi University, Hangzhou 310018, China. Electronic address: lilh@hdu.edu.cn.
  • Changmiao Wang
    Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Boulevard, Shenzhen 518055, China; University of Chinese Academy of Sciences, 52 Sanlihe Road, Beijing 100864, China.