Enhanced electroencephalogram signal classification: A hybrid convolutional neural network with attention-based feature selection.
Journal:
Brain Research
PMID:
39904453
Abstract
Accurate recognition and classification of motor imagery electroencephalogram (MI-EEG) signals are crucial for the successful implementation of brain-computer interfaces (BCI). However, inherent characteristics of raw MI-EEG signals, such as nonlinearity, low signal-to-noise ratio, and large inter-subject variability, pose significant challenges for MI-EEG classification with traditional machine learning methods. To address these challenges, we propose an automatic feature extraction method based on deep learning for MI-EEG classification. First, the raw MI-EEG signals are denoised using discrete wavelet transform and common average reference. To capture the regularity and specificity of brain neural activity, a convolutional neural network (CNN) is used to extract the time-domain features of MI-EEG. Spatial features are also extracted to reflect the activity relationships and connection states among different brain regions. This process yields time series enriched with spatial information, in which the crucial feature sequences are enhanced through talking-heads attention. Finally, more abstract spatial-temporal features are extracted with a temporal convolutional network (TCN), and classification is performed by a fully connected layer. Validation experiments on the BCI Competition IV-2a dataset show that the enhanced EEG model achieves an average per-subject classification accuracy of 85.53%. Compared with CNN, EEGNet, CNN-LSTM, and EEG-TCNet, the classification accuracy of this model is improved by 11.24%, 6.90%, 11.18%, and 6.13%, respectively. These results underscore the potential of the proposed model to significantly enhance intention recognition in MI-EEG.
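The denoising step described in the abstract (discrete wavelet transform followed by common average reference) can be sketched in NumPy. This is a minimal, hypothetical illustration only: the abstract does not specify the wavelet family, decomposition level, or thresholding rule, so a one-level Haar transform with soft thresholding is assumed here, and the function names are the author's own.

```python
import numpy as np

def common_average_reference(eeg):
    """Re-reference EEG by subtracting, at each sample, the mean over channels.

    eeg: array of shape (n_channels, n_samples).
    """
    return eeg - eeg.mean(axis=0, keepdims=True)

def haar_dwt_denoise(signal, threshold=0.5):
    """One-level Haar DWT denoising with soft thresholding (assumed scheme).

    Decomposes into approximation/detail coefficients, shrinks the detail
    coefficients toward zero, and reconstructs the signal.
    """
    x = np.asarray(signal, dtype=float)
    x = x[: (len(x) // 2) * 2]                     # even length for pairing
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)         # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)         # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2.0)             # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out

def preprocess(eeg, threshold=0.5):
    """Denoise each channel, then apply common average referencing."""
    denoised = np.vstack([haar_dwt_denoise(ch, threshold) for ch in eeg])
    return common_average_reference(denoised)
```

After `common_average_reference`, the mean across channels is zero at every sample, which suppresses noise common to all electrodes; the wavelet step attenuates high-frequency detail components channel by channel.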