Three simple steps to improve the interpretability of EEG-SVM studies.

Journal: Journal of Neurophysiology

Abstract

Machine-learning systems that classify electroencephalography (EEG) data offer promising perspectives for the diagnosis and prognosis of a wide variety of neurological and psychiatric conditions, yet their clinical adoption remains low. We propose here that much of the difficulty in translating EEG-machine-learning research to the clinic stems from consistent inaccuracies in technical reporting, which severely impair the interpretability of often-high performance claims. Taking as an example a major class of machine-learning algorithms used in EEG research, the support-vector machine (SVM), we highlight three important aspects of model development (normalization, hyperparameter optimization, and cross-validation) and show that, while these three aspects can make or break the performance of the system, they are left entirely undocumented in a strikingly large proportion of the research literature. Providing a more systematic description of these three aspects of model development constitutes three simple steps to improve the interpretability of EEG-SVM research and, ultimately, its clinical adoption.
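To make the reporting points above concrete, the following is a minimal, stdlib-only Python sketch (with hypothetical toy data, not from the study) of the pattern the abstract argues should be documented: inside each cross-validation fold, normalization parameters are estimated on the training fold only and then applied to the held-out fold, with hyperparameter search confined to the training fold. Function names and values are illustrative, not the authors' code.

```python
from statistics import mean, stdev

def zscore_params(rows):
    """Column-wise (mean, std) estimated from the TRAINING fold only."""
    cols = list(zip(*rows))
    return [(mean(c), stdev(c)) for c in cols]

def apply_zscore(rows, params):
    """Apply training-fold normalization parameters to any fold."""
    return [[(x - m) / s for x, (m, s) in zip(r, params)] for r in rows]

def kfold_indices(n, k):
    """Simple contiguous k-fold split of n sample indices."""
    fold = n // k
    for i in range(k):
        test = list(range(i * fold, (i + 1) * fold))
        train = [j for j in range(n) if j not in test]
        yield train, test

# Toy "EEG feature" matrix: 8 samples x 2 features (hypothetical values).
X = [[1.0, 10.0], [2.0, 12.0], [3.0, 11.0], [4.0, 13.0],
     [5.0, 14.0], [6.0, 15.0], [7.0, 16.0], [8.0, 17.0]]

for train, test in kfold_indices(len(X), k=4):
    params = zscore_params([X[i] for i in train])        # fit on train only
    X_test = apply_zscore([X[i] for i in test], params)  # transform held-out fold
    # A hyperparameter search (e.g. over the SVM's C and kernel) would use a
    # further inner split of `train`, never touching `test`.
```

Reporting exactly these three choices (how features were normalized, how hyperparameters were selected, and how folds were constructed) is what the abstract identifies as the gap in the EEG-SVM literature; fitting the scaler on the full dataset before splitting would leak test-set statistics into training and inflate reported performance.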

Authors

  • Coralie Joucla
    Laboratoire de Recherches Intégratives en Neurosciences et Psychologie Cognitive (LINC), Université de Bourgogne Franche-Comté, Besançon, France.
  • Damien Gabriel
    Laboratoire de Recherches Intégratives en Neurosciences et Psychologie Cognitive (LINC), Université de Bourgogne Franche-Comté, Besançon, France.
  • Juan-Pablo Ortega
    Universität Sankt Gallen, Faculty of Mathematics and Statistics, Bodanstrasse 6, CH-9000 Sankt Gallen, Switzerland; Centre National de la Recherche Scientifique (CNRS), France. Electronic address: Juan-Pablo.Ortega@unisg.ch.
  • Emmanuel Haffen
    Laboratoire de Recherches Intégratives en Neurosciences et Psychologie Cognitive (LINC), Université de Bourgogne Franche-Comté, Besançon, France.