Multimodal attention fusion deep self-reconstruction presentation model for Alzheimer's disease diagnosis and biomarker identification.
Journal:
Artificial cells, nanomedicine, and biotechnology
Published Date:
Dec 1, 2025
Abstract
The unknown pathogenic mechanisms of Alzheimer's disease (AD) make treatment challenging. Neuroimaging genetics offers a way to identify disease biomarkers for early diagnosis, but traditional association analysis methods struggle with complex nonlinear, multimodal and multi-expression data. Therefore, a multimodal attention fusion deep self-reconstruction presentation (MAFDSRP) model is proposed to address this problem. First, multimodal brain imaging data are processed through a novel histogram-matching multiple-attention mechanism that dynamically adjusts the weight of each input brain imaging modality. Simultaneously, the genetic data are preprocessed to remove low-quality samples. Subsequently, the genetic data and the fused neuroimaging data are separately fed into the self-reconstruction network to learn nonlinear relationships, and subspace clustering is performed at the top layer of the network. Finally, the learned genetic and fused neuroimaging representations are analysed through expression association analysis to identify AD-related biomarkers. The identified biomarkers underwent systematic multi-level analysis, revealing their roles at the molecular, tissue and functional levels and highlighting AD-linked processes such as inflammation, lipid metabolism, memory and emotional processing. The experimental results show that MAFDSRP achieved 0.58 in the association analysis, demonstrating its great potential for accurately identifying AD-related biomarkers.
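To make the pipeline in the abstract more concrete, the sketch below illustrates two of the named components in PyTorch: attention-based fusion that learns a weight for each imaging modality, and a top-layer self-reconstruction (self-expressive) layer of the kind used for subspace clustering. This is a minimal illustrative sketch, not the authors' MAFDSRP implementation; the layer sizes, softmax attention, and class names are assumptions for demonstration only.

```python
# Minimal sketch (assumed architecture, not the published MAFDSRP code).
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Learn one attention score per imaging modality and fuse by weighted sum."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)  # scalar score per modality

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, n_modalities, feat_dim)
        scores = self.score(feats).squeeze(-1)         # (batch, n_modalities)
        weights = torch.softmax(scores, dim=1)         # normalized modality weights
        return (weights.unsqueeze(-1) * feats).sum(1)  # (batch, feat_dim)


class SelfExpressiveLayer(nn.Module):
    """Top-layer self-reconstruction: Z is reconstructed as C @ Z with a
    learnable coefficient matrix C; |C| can serve as an affinity matrix
    for spectral/subspace clustering."""

    def __init__(self, n_samples: int):
        super().__init__()
        self.C = nn.Parameter(1e-4 * torch.randn(n_samples, n_samples))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (n_samples, feat_dim); zero the diagonal so a sample cannot
        # trivially reconstruct itself.
        C = self.C - torch.diag(torch.diag(self.C))
        return C @ z


if __name__ == "__main__":
    imaging = torch.randn(32, 3, 64)            # 32 subjects, 3 modalities, 64-d features
    fused = AttentionFusion(64)(imaging)        # (32, 64) fused imaging representation
    recon = SelfExpressiveLayer(32)(fused)      # self-reconstruction of the batch
    loss = ((recon - fused) ** 2).mean()        # reconstruction term of the objective
    print(fused.shape, recon.shape, loss.item())
```

In practice such a reconstruction loss would be combined with a regularizer on C, and the learned imaging and genetic representations would then be compared in an association analysis, as the abstract describes.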