Using a dual-stream attention neural network to characterize mild cognitive impairment based on retinal images.

Journal: Computers in biology and medicine
Published Date:

Abstract

Mild cognitive impairment (MCI) is a critical transitional stage between normal cognition and dementia, and early detection is crucial for timely intervention. Retinal imaging has emerged as a promising potential biomarker for MCI. This study aimed to develop a dual-stream attention neural network to classify individuals with MCI from multi-modal retinal images. Our approach incorporated a cross-modality fusion technique, a variable-scale dense residual model, and a multi-classifier mechanism within the dual-stream network. The model used a residual module to extract image features and a multi-level feature aggregation method to capture complex contextual information. Self-attention and cross-attention modules at each convolutional layer fused features from the optical coherence tomography (OCT) and fundus modalities, yielding multiple output losses. The network was applied to classify individuals with MCI, patients with Alzheimer's disease, and control participants with normal cognition. By fine-tuning the pre-trained model, we further classified community-dwelling participants into two groups based on cognitive impairment test scores. To identify the retinal imaging biomarkers driving accurate prediction, we used the Gradient-weighted Class Activation Mapping (Grad-CAM) technique. The proposed method achieved precision rates of 84.96% and 80.90% in classifying MCI and positive test scores for cognitive impairment, respectively. Notably, changes in the optic nerve head on fundus photographs or OCT images of patients with MCI were not among the features used to discriminate patients from the control group. These findings demonstrate the potential of our approach for identifying individuals with MCI and underscore the value of retinal imaging for early detection of cognitive impairment.
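The cross-attention fusion the abstract describes, where each modality stream attends over the other's features, can be illustrated with a minimal NumPy sketch of generic scaled dot-product cross-attention. The token counts, feature dimension, and the averaging merge below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d_k):
    """One modality's features attend over the other's (scaled dot-product)."""
    scores = queries @ keys_values.T / np.sqrt(d_k)  # (n, m) pairwise similarities
    weights = softmax(scores, axis=-1)               # each row sums to 1
    return weights @ keys_values                     # (n, d) attended features

rng = np.random.default_rng(0)
oct_feats = rng.standard_normal((16, 64))     # stand-in tokens from the OCT stream
fundus_feats = rng.standard_normal((16, 64))  # stand-in tokens from the fundus stream

# Each stream queries the other; the two directions are then merged
# (simple averaging here; the paper's merge may differ).
oct_attn = cross_attention(oct_feats, fundus_feats, d_k=64)
fundus_attn = cross_attention(fundus_feats, oct_feats, d_k=64)
fused = 0.5 * (oct_attn + fundus_attn)
print(fused.shape)  # (16, 64)
```

In the full model this fusion would be applied at every convolutional level, with learned query/key/value projections rather than the raw features used here.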

Authors

  • Hebei Gao
    School of Artificial Intelligence, Wenzhou Polytechnic, Wenzhou, 325035, China; Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325000, China.
  • Shuaiye Zhao
    College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou, 325035, China.
  • Gu Zheng
    Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325000, China.
  • Xinmin Wang
  • Runyi Zhao
    Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325000, China.
  • Zhigeng Pan
    School of Artificial Intelligence, Nanjing University of Information Science & Technology, Nanjing, 210044, China.
  • Hong Li
Department of Public Health Sciences, Medical University of South Carolina, Charleston, SC, USA.
  • Fan Lü
    Institute of Waste Treatment & Reclamation, College of Environmental Science and Engineering, Tongji University, Shanghai 200092, China; Shanghai Institute of Pollution Control and Ecological Security, Shanghai 200092, China.
  • Meixiao Shen
    Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325000, China. Electronic address: smx77@mail.eye.ac.cn.