Dynamic Hierarchical Convolutional Attention Network for Recognizing Motor Imagery Intention.
Journal:
IEEE Transactions on Cybernetics
PMID:
40131750
Abstract
The neural activity patterns of localized brain regions are crucial for recognizing brain intentions. However, existing electroencephalogram (EEG) decoding models, especially those based on deep learning, predominantly focus on global spatial features and neglect valuable local information, potentially leading to suboptimal performance. Therefore, this study proposed a dynamic hierarchical convolutional attention network (DH-CAN) that comprehensively learned discriminative information from both the global and local spatial domains, as well as from the time-frequency domain of EEG signals. Specifically, a multiscale convolutional block was designed to dynamically capture time-frequency information. The channels of the EEG signals were mapped to different brain regions according to the neural activity patterns of motor imagery. The global and local spatial features were then extracted hierarchically to fully exploit the discriminative information. Furthermore, regional connectivity was established using a graph attention network and incorporated into the local spatial features. In particular, network parameters were shared between symmetrical brain regions to better capture asymmetrical motor imagery patterns. Finally, the learned multilevel features were integrated through a high-level fusion layer. Extensive experiments on two datasets demonstrated that the proposed model outperformed existing benchmark methods across multiple evaluation metrics. These findings suggested that the proposed model offered a novel perspective for EEG decoding research.
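To make the described pipeline concrete, the following is a minimal PyTorch sketch of the components named in the abstract: multiscale temporal convolutions, a global spatial convolution, region-wise (local) spatial convolutions with weight sharing between symmetric regions, attention over region embeddings, and a fusion layer. This is not the authors' implementation; the channel-to-region grouping, layer sizes, pooling choices, and the use of nn.MultiheadAttention as a stand-in for the graph attention network are all assumptions made for illustration.

    # Illustrative sketch only; all hyperparameters and the region map are assumptions.
    import torch
    import torch.nn as nn

    class MultiScaleTemporalConv(nn.Module):
        """Parallel temporal convolutions with different kernel lengths."""
        def __init__(self, out_per_scale=8, kernel_sizes=(15, 31, 63)):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Conv2d(1, out_per_scale, (1, k), padding=(0, k // 2))
                for k in kernel_sizes
            ])

        def forward(self, x):                      # x: (batch, 1, channels, time)
            return torch.cat([b(x) for b in self.branches], dim=1)

    class DHCANSketch(nn.Module):
        def __init__(self, n_channels=22, n_classes=4, n_filters=24, embed=32):
            super().__init__()
            # Hypothetical channel-to-region map (montage indices); a real
            # mapping would follow the electrode layout of the dataset.
            self.regions = {
                "frontal":  [0, 1, 2, 3, 4, 5],
                "left":     [6, 7, 12, 13],
                "central":  [8, 9, 10, 11],
                "right":    [14, 15, 16, 17],
                "parietal": [18, 19, 20, 21],
            }
            self.temporal = MultiScaleTemporalConv(out_per_scale=n_filters // 3)
            # Global spatial convolution across all electrodes.
            self.global_spatial = nn.Conv2d(n_filters, embed, (n_channels, 1))
            # Local spatial convolutions, one per region; the symmetric
            # left/right pair shares a single module (tied weights).
            shared_lr = nn.Conv2d(n_filters, embed, (len(self.regions["left"]), 1))
            self.local_spatial = nn.ModuleDict({
                name: (shared_lr if name in ("left", "right")
                       else nn.Conv2d(n_filters, embed, (len(idx), 1)))
                for name, idx in self.regions.items()
            })
            # Single-head attention over region embeddings stands in for the
            # graph attention network that couples brain regions.
            self.region_attn = nn.MultiheadAttention(embed, num_heads=1,
                                                     batch_first=True)
            # High-level fusion of the global and region embeddings.
            self.fusion = nn.Sequential(
                nn.Flatten(),
                nn.Linear(embed * (len(self.regions) + 1), n_classes),
            )

        def forward(self, x):                      # x: (batch, 1, channels, time)
            feat = torch.relu(self.temporal(x))    # (B, F, C, T)
            g = self.global_spatial(feat).mean(-1).squeeze(-1)          # (B, E)
            local = []
            for name, idx in self.regions.items():
                r = self.local_spatial[name](feat[:, :, idx, :])        # (B, E, 1, T)
                local.append(r.mean(-1).squeeze(-1))                    # (B, E)
            local = torch.stack(local, dim=1)                           # (B, R, E)
            local, _ = self.region_attn(local, local, local)            # region interactions
            fused = torch.cat([g.unsqueeze(1), local], dim=1)           # (B, R + 1, E)
            return self.fusion(fused)                                   # class logits

    # Dummy usage: 8 trials, 22 EEG channels, 1000 time samples, 4 classes.
    model = DHCANSketch()
    print(model(torch.randn(8, 1, 22, 1000)).shape)   # torch.Size([8, 4])

In this sketch the shared left/right convolution illustrates the parameter sharing between symmetrical regions mentioned in the abstract, and the attention layer provides a simple form of learned regional connectivity; the paper itself should be consulted for the actual architecture and training details.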