Flexible Patched Brain Transformer model for EEG decoding.

Journal: Scientific Reports
Published Date:

Abstract

Decoding the human brain using non-invasive methods is a significant challenge. This study aims to enhance electroencephalography (EEG) decoding by developing machine learning methods. Specifically, we propose the novel, attention-based Patched Brain Transformer model to achieve this goal. The model is flexible with respect to the number of EEG channels and the recording duration, enabling effective pre-training across diverse datasets. We investigate the effect of data augmentation methods and pre-training on the training process. To gain insight into the training behavior, we also inspect the model architecture. We compare our model with state-of-the-art models and demonstrate superior performance using only a fraction of the parameters. These results are achieved with supervised pre-training, combined with time shifts as data augmentation, for multi-participant classification on motor imagery datasets.
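To make the two mechanisms named in the abstract concrete, the sketch below shows (a) a time-shift augmentation and (b) a patch embedding whose token count adapts to any channel count and recording duration. This is a minimal illustration assuming a ViT-style patching of each channel's signal; the class and function names (`PatchEmbedding`, `time_shift`) and all hyperparameters are hypothetical and not taken from the paper, whose exact formulation may differ.

```python
import torch
import torch.nn as nn

def time_shift(x: torch.Tensor, max_shift: int) -> torch.Tensor:
    """Randomly shift a batch of EEG trials along the time axis.
    x: (batch, channels, time). A simple circular-shift variant of the
    time-shift augmentation mentioned in the abstract (hypothetical
    implementation; the paper's exact variant may differ)."""
    shift = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    return torch.roll(x, shifts=shift, dims=-1)

class PatchEmbedding(nn.Module):
    """Split each channel's signal into fixed-length temporal patches and
    project each patch to a token embedding. Because tokens are formed
    per patch, the resulting token sequence adapts to any number of
    channels and any recording duration, which is what allows one model
    to be pre-trained across heterogeneous EEG datasets."""
    def __init__(self, patch_len: int, d_model: int):
        super().__init__()
        self.patch_len = patch_len
        self.proj = nn.Linear(patch_len, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); time must be divisible by patch_len
        b, c, t = x.shape
        x = x.reshape(b, c * (t // self.patch_len), self.patch_len)
        return self.proj(x)  # (batch, num_tokens, d_model)

# Usage: a 22-channel, 1000-sample trial and a 64-channel, 500-sample
# trial pass through the same embedding, yielding different token counts.
embed = PatchEmbedding(patch_len=100, d_model=128)
tokens_a = embed(time_shift(torch.randn(8, 22, 1000), max_shift=50))
tokens_b = embed(torch.randn(8, 64, 500))
print(tokens_a.shape, tokens_b.shape)  # (8, 220, 128) (8, 320, 128)
```

The variable-length token sequence would then feed a standard transformer encoder; only the (learned or fixed) positional information needs to account for the differing sequence lengths across datasets.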

Authors

  • Timon Klein
    Department of Mathematics, Otto-von-Guericke University Magdeburg, 39106, Magdeburg, Germany. timon.klein@ovgu.de.
  • Piotr Minakowski
    Department of Mathematics, Otto-von-Guericke University Magdeburg, 39106, Magdeburg, Germany.
  • Sebastian Sager
Department of Mathematics, Otto-von-Guericke University Magdeburg, 39106, Magdeburg, Germany.