Advancing BCI with a transformer-based model for motor imagery classification.

Journal: Scientific Reports
Published Date:

Abstract

Brain-computer interfaces (BCIs) harness electroencephalographic (EEG) signals for direct neural control of devices, offering significant benefits for individuals with motor impairments. Traditional machine learning methods for EEG-based motor imagery (MI) classification face challenges such as manual feature extraction and susceptibility to noise. This paper introduces EEGEncoder, a deep learning framework that employs modified transformers and Temporal Convolutional Networks (TCNs) to surmount these limitations. We propose a novel fusion architecture, the Dual-Stream Temporal-Spatial Block (DSTS), that captures temporal and spatial features, improving the accuracy of the motor imagery classification task. Additionally, we use multiple parallel structures to enhance the model's performance. When tested on the BCI Competition IV-2a dataset, the proposed model achieved an average accuracy of 86.46% in the subject-dependent setting and 74.48% in the subject-independent setting.
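To make the dual-stream idea concrete, the following is a minimal, hypothetical sketch of a temporal-spatial block: one stream filters each electrode's signal along the time axis (a crude stand-in for a TCN's causal convolution), the other mixes information across electrodes at each time step, and the two outputs are concatenated. All function names, shapes, and the moving-average filter are illustrative assumptions; this does not reproduce the paper's actual DSTS layer configuration.

```python
import numpy as np

def temporal_stream(x, kernel=5):
    """Causal moving-average filter along time for each channel.

    A placeholder for a TCN-style causal 1-D convolution: each output
    sample depends only on the current and past `kernel` input samples.
    """
    channels, timesteps = x.shape
    out = np.zeros_like(x)
    for t in range(timesteps):
        lo = max(0, t - kernel + 1)
        out[:, t] = x[:, lo:t + 1].mean(axis=1)
    return out

def spatial_stream(x, weights):
    """Linear mixing across electrodes at every time step."""
    return weights @ x  # (channels, channels) @ (channels, time)

def dsts_block(x, rng):
    """Illustrative dual-stream block: run both streams, stack features."""
    w = rng.standard_normal((x.shape[0], x.shape[0])) / np.sqrt(x.shape[0])
    return np.concatenate([temporal_stream(x), spatial_stream(x, w)], axis=0)

rng = np.random.default_rng(0)
# 22 electrodes x 1000 time samples, roughly the shape of a BCI IV-2a trial
eeg = rng.standard_normal((22, 1000))
features = dsts_block(eeg, rng)
print(features.shape)  # (44, 1000): temporal and spatial streams stacked
```

In the actual model, each stream would be a learned network (TCN and transformer layers) rather than fixed filters, and several such blocks run in parallel before classification.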

Authors

  • Wangdan Liao
    School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China.
  • Hongyun Liu
    Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, No. 19 Xin Jie Kou Wai Street, Beijing, 100875, China. hyliu@bnu.edu.cn.
  • Weidong Wang
Zhejiang Huade New Materials Co., Ltd., Hangzhou, Zhejiang Province, China.