Hybrid multi-modality multi-task learning for forecasting progression trajectories in subjective cognitive decline.
Journal:
Neural Networks: The Official Journal of the International Neural Network Society
PMID:
39985974
Abstract
While numerous studies strive to exploit the complementary potential of MRI and PET with learning-based methods, effectively fusing the two modalities remains challenging due to their inherently distinct properties. In addition, current studies often face small sample sizes and missing PET data caused by factors such as patient withdrawal or low image quality. To this end, we propose a hybrid multi-modality multi-task learning (HML) framework with cross-domain knowledge transfer for forecasting progression trajectories of subjective cognitive decline (SCD). Our HML comprises (1) missing PET imputation, (2) multi-modality feature extraction for MRI and PET feature learning with a novel softmax-triplet constraint, (3) attention-based fusion of MRI and PET features, and (4) multi-task prediction of category labels and clinical scores such as the Mini-Mental State Examination (MMSE) and the Geriatric Depression Scale (GDS). To address the small-sample-size problem, a transfer learning strategy is developed to enable knowledge transfer from a relatively large-scale dataset with MRI and PET from 795 subjects to two small-scale SCD cohorts with a total of 136 subjects. Experimental results indicate that HML surpasses several state-of-the-art methods in jointly predicting category labels and clinical scores of subjective cognitive decline. Results show that the MMSE scores of SCD subjects who develop mild cognitive impairment during the 2-year/7-year follow-up are significantly lower than those of subjects who remain stable, whereas the relationship between SCD progression and GDS scores is more complex.
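
To make components (3) and (4) of the abstract more concrete, the snippet below is a minimal PyTorch sketch, not the authors' implementation: it fuses pre-extracted MRI and PET feature vectors with learned per-modality attention weights and attaches multi-task heads for the category label and the MMSE/GDS scores. The module name AttentionFusionMultiTask, the 128-dimensional features, and the equal loss weighting are illustrative assumptions.

import torch
import torch.nn as nn


class AttentionFusionMultiTask(nn.Module):
    """Hypothetical sketch: attention-based fusion of MRI and PET feature
    vectors, followed by multi-task heads that predict a category label
    plus MMSE and GDS scores from the shared fused representation."""

    def __init__(self, feat_dim: int = 128, num_classes: int = 2):
        super().__init__()
        # Scores each modality's feature vector; a softmax over the two
        # modalities yields the attention weights used for fusion.
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )
        # Task-specific heads on top of the fused representation.
        self.cls_head = nn.Linear(feat_dim, num_classes)   # category label
        self.mmse_head = nn.Linear(feat_dim, 1)            # MMSE score
        self.gds_head = nn.Linear(feat_dim, 1)             # GDS score

    def forward(self, mri_feat: torch.Tensor, pet_feat: torch.Tensor):
        feats = torch.stack([mri_feat, pet_feat], dim=1)   # (batch, 2, feat_dim)
        weights = torch.softmax(self.attn(feats), dim=1)   # (batch, 2, 1)
        fused = (weights * feats).sum(dim=1)                # (batch, feat_dim)
        return self.cls_head(fused), self.mmse_head(fused), self.gds_head(fused)


# Joint multi-task loss: cross-entropy for the label, MSE for the scores.
model = AttentionFusionMultiTask()
mri = torch.randn(8, 128)
pet = torch.randn(8, 128)        # imputed features would be used when PET is missing
labels = torch.randint(0, 2, (8,))
mmse = torch.rand(8, 1) * 30.0
gds = torch.rand(8, 1) * 15.0

logits, mmse_pred, gds_pred = model(mri, pet)
loss = (nn.functional.cross_entropy(logits, labels)
        + nn.functional.mse_loss(mmse_pred, mmse)
        + nn.functional.mse_loss(gds_pred, gds))
loss.backward()

In this sketch the two score-regression losses are simply added to the classification loss with unit weights; in practice the relative weighting of the tasks, the imputation of missing PET features, and the softmax-triplet constraint on the extracted features are described in the full paper.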