[The accuracy and influencing factors of sleep staging based on single-channel EEG via a deep neural network].

Journal: Zhonghua er bi yan hou tou jing wai ke za zhi = Chinese journal of otorhinolaryngology head and neck surgery
Published Date:

Abstract

To investigate the accuracy of an artificial intelligence sleep staging model, based on single-channel EEG collected from different locations on the head, in patients with habitual snoring and obstructive sleep apnea hypopnea syndrome (OSAHS). The clinical data of 114 adults with habitual snoring and OSAHS who visited the Sleep Medicine Center of Beijing Tongren Hospital from September 2020 to March 2021 were analyzed retrospectively, including 93 males and 21 females aged 20 to 64 years. Eighty-five adults with OSAHS and 29 subjects with habitual snoring were included. Sleep staging was performed on single-lead EEG signals from different locations (Fp2-M1, C4-M1, F3-M2, REOG-M1, O1-M2) using a deep learning segmentation model trained on previous data. Manual scoring results were used as the gold standard to analyze the agreement rate and the influence of different disease categories. EEG data comprising 124 747 30-second epochs were taken as the testing dataset. The model accuracy for distinguishing wake/sleep was 92.3%, 92.6%, 93.5%, 89.2% and 83.0%, respectively, based on EEG channels Fp2-M1, C4-M1, F3-M2, REOG-M1 and O1-M2. The model accuracy for distinguishing wake/REM/NREM and wake/REM/N1-2/SWS was 84.7% and 80.1%, respectively, based on channel Fp2-M1, which is located on the forehead. The AHI calculated from the total sleep time derived from the model and from the gold standard was 13.6 [4.3, 42.5] and 14.2 [4.8, 42.7], respectively (Z=-2.477, P=0.013), and the kappa coefficient was 0.977. Automatic sleep staging via a deep neural network model based on forehead single-channel EEG (Fp2-M1) shows good consistency in identifying sleep stages in a population with habitual snoring and OSAHS of different severities. The AHI calculated from this model is highly consistent with manual scoring.
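The abstract's key derived metric is the AHI, which depends on the staging model only through the total sleep time in the denominator. A minimal sketch of that standard arithmetic (events per hour of sleep, with sleep time summed from 30-second epochs scored as sleep) is below; the function names and the example numbers are illustrative, not values from the study:

```python
def total_sleep_time_hours(epoch_labels, epoch_sec=30):
    """Sum sleep time from per-epoch stage labels ('W' = wake, anything else = sleep)."""
    sleep_epochs = sum(1 for stage in epoch_labels if stage != "W")
    return sleep_epochs * epoch_sec / 3600.0

def apnea_hypopnea_index(event_count, tst_hours):
    """AHI = (apneas + hypopneas) per hour of total sleep time."""
    if tst_hours <= 0:
        raise ValueError("total sleep time must be positive")
    return event_count / tst_hours

# Illustrative: 840 epochs scored as sleep (7 h TST) and 100 respiratory events
tst = total_sleep_time_hours(["N2"] * 840 + ["W"] * 60)
print(round(apnea_hypopnea_index(100, tst), 1))  # -> 14.3
```

Because the event count is fixed by the respiratory channels, a staging model that slightly under- or over-estimates total sleep time shifts the AHI proportionally, which is why the abstract compares model-derived and manually derived AHI directly.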

Authors

  • X Gao
  • Y R Li
    Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Key Laboratory of Otolaryngology Head and Neck Surgery, Ministry of Education (Capital Medical University), Beijing 100730, China.
  • G D Lin
    Department of Electronic Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China.
  • M K Xu
    Department of Electronic Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China.
  • X Q Zhang
    Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Key Laboratory of Otolaryngology Head and Neck Surgery, Ministry of Education (Capital Medical University), Beijing 100730, China.
  • Y H Shi
Department of Otorhinolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Key Laboratory of Otorhinolaryngology Head and Neck Surgery, Ministry of Education (Capital Medical University), Beijing 100730, China.
  • W Xu
  • X J Wang
    Department of Radiology, Hebei Province Chest Hospital, Shijiazhuang 050041, China.
  • D M Han
    Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Key Laboratory of Otolaryngology Head and Neck Surgery, Ministry of Education (Capital Medical University), Beijing 100730, China.