A deep learning model combining convolutional neural networks and a selective kernel mechanism for SSVEP-based BCIs.
Journal:
Computers in Biology and Medicine
Published Date:
Jul 1, 2025
Abstract
Existing deep learning methods for brain-computer interfaces (BCIs) based on steady-state visually evoked potentials (SSVEPs) face several challenges, such as overfitting when training data are insufficient and difficulty capturing global temporal features due to limited receptive fields. To address these challenges, we propose a novel deep learning model, FBCNN-TKS, which extracts harmonic components from SSVEP signals using a filter bank technique, performs feature extraction with convolutional neural networks (CNNs) and a temporal kernel selection (TKS) module, and optimizes the model using a weighted sum of cross-entropy loss and center loss as the objective function. The key innovation of our approach lies in the TKS module, which significantly enhances feature extraction capability by providing a broader receptive field. Additionally, dilated and grouped convolutions are used in the TKS module to reduce the number of model parameters, minimizing the risk of overfitting and improving classification accuracy. Experimental results show that FBCNN-TKS outperforms state-of-the-art methods in terms of classification accuracy and information transfer rate (ITR). Specifically, at a data length of 0.4 s, FBCNN-TKS achieved the highest ITRs of 251.54 bpm and 203.47 bpm, with the highest accuracies of 83.10% and 72.98%, on the public Benchmark and BETA datasets, respectively, exhibiting superior performance. The FBCNN-TKS model holds great potential for the development of high-performance SSVEP-BCI character-spelling systems.
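The weighted objective described in the abstract (cross-entropy loss plus center loss) can be sketched as below. This is a minimal NumPy illustration, not the paper's implementation: the weighting factor `lam`, the function names, and the fixed (non-learned) class centers are all assumptions introduced here for clarity.

```python
import numpy as np

def cross_entropy(logits, labels):
    # Softmax cross-entropy averaged over the batch.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def center_loss(features, labels, centers):
    # Half the mean squared distance between each feature vector
    # and the center of its class (center loss in the style of
    # Wen et al.; in practice the centers are updated during training).
    diffs = features - centers[labels]
    return 0.5 * (diffs ** 2).sum(axis=1).mean()

def combined_objective(logits, features, labels, centers, lam=0.01):
    # Weighted sum used as the training objective; `lam` is a
    # hypothetical weight, not a value reported by the paper.
    return cross_entropy(logits, labels) + lam * center_loss(features, labels, centers)

# Toy example: 3 samples, 2 classes, 4-dimensional feature vectors.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 2))
features = rng.normal(size=(3, 4))
labels = np.array([0, 1, 0])
centers = np.zeros((2, 4))
print(combined_objective(logits, features, labels, centers))
```

Minimizing the center-loss term pulls same-class feature vectors toward a shared center, which complements cross-entropy's class-separation objective and helps when training data are scarce.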