Improving performance of deep learning models using 3.5D U-Net via majority voting for tooth segmentation on cone beam computed tomography.

Journal: Scientific reports
Published Date:

Abstract

Deep learning allows automatic segmentation of teeth on cone beam computed tomography (CBCT). However, segmentation performance varies among training strategies. Our aim was to propose a 3.5D U-Net to improve the performance of the U-Net in segmenting teeth on CBCT. This study retrospectively enrolled 24 patients who received CBCT. Five U-Nets (2Da U-Net, 2Dc U-Net, 2Ds U-Net, 2.5Da U-Net, and 3D U-Net) were trained to segment the teeth. Four additional U-Nets (2.5Dv U-Net, 3.5Dv5 U-Net, 3.5Dv4 U-Net, and 3.5Dv3 U-Net) were obtained by majority voting. Mathematical morphology operations, namely erosion and dilation (E&D), were applied to remove diminutive noise speckles. Segmentation performance was evaluated by fourfold cross-validation using the Dice similarity coefficient (DSC), accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). The Kruskal-Wallis test with post hoc analysis and Bonferroni correction was used for group comparison; P < 0.05 was considered statistically significant. The performance of the U-Nets varied significantly among training strategies for tooth segmentation on CBCT (P < 0.05). The 3.5Dv5 U-Net and 2.5Dv U-Net showed DSC and PPV significantly higher than any of the five originally trained U-Nets (all P < 0.05). E&D significantly improved the DSC, accuracy, specificity, and PPV (all P < 0.005). The 3.5Dv5 U-Net achieved the highest DSC and accuracy among all U-Nets. Overall, the segmentation performance of the U-Net can be improved by majority voting and E&D, and the 3.5Dv5 U-Net achieved the best segmentation performance among all U-Nets.
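The post-processing steps described above (per-voxel majority voting over several model outputs, morphological erosion and dilation to remove small speckles, and DSC evaluation) can be sketched as follows. This is a minimal illustration using NumPy and SciPy, not the authors' implementation; the function names and the majority threshold are assumptions for demonstration.

```python
import numpy as np
from scipy import ndimage

def majority_vote(masks):
    """Combine binary segmentation masks by per-voxel majority voting.

    masks: list of equally shaped binary (0/1) arrays, one per trained model.
    A voxel is labeled foreground when more than half of the models agree.
    (Illustrative threshold; the paper's voting rule may differ in detail.)
    """
    votes = np.stack(masks, axis=0).sum(axis=0)
    return (votes > len(masks) / 2).astype(np.uint8)

def erosion_dilation(mask, iterations=1):
    """Erosion followed by dilation (morphological opening).

    Removes speckles smaller than the structuring element while largely
    restoring the shape of larger connected components.
    """
    eroded = ndimage.binary_erosion(mask, iterations=iterations)
    return ndimage.binary_dilation(eroded, iterations=iterations).astype(np.uint8)

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0
```

For example, an isolated single-voxel speckle is eliminated by `erosion_dilation`, because erosion removes it entirely and dilation then has nothing to restore, whereas a larger tooth region survives the opening.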

Authors

  • Kang Hsu
    Department of Dentistry, Tri-Service General Hospital, Taipei, Taiwan, Republic of China.
  • Da-Yo Yuh
    Department of Periodontology, School of Dentistry, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan, ROC.
  • Shao-Chieh Lin
    Department of Medical Imaging, China Medical University Hsinchu Hospital, Hsinchu, Taiwan, Republic of China.
  • Pin-Sian Lyu
Department of Medical Imaging, China Medical University Hsinchu Hospital, No. 199, Sec. 1, Xinglong Rd., Zhubei, Hsinchu 302, Taiwan, ROC.
  • Guan-Xin Pan
Department of Medical Imaging, China Medical University Hsinchu Hospital, No. 199, Sec. 1, Xinglong Rd., Zhubei, Hsinchu 302, Taiwan, ROC.
  • Yi-Chun Zhuang
Department of Medical Imaging, China Medical University Hsinchu Hospital, No. 199, Sec. 1, Xinglong Rd., Zhubei, Hsinchu 302, Taiwan, ROC.
  • Chia-Ching Chang
    Department of Medical Imaging, China Medical University Hsinchu Hospital, Hsinchu, Taiwan, Republic of China.
  • Hsu-Hsia Peng
    Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu 300044, Taiwan.
  • Tung-Yang Lee
    Master's Program of Biomedical Informatics and Biomedical Engineering, Feng Chia University, Taichung, Taiwan, ROC.
  • Cheng-Hsuan Juan
Department of Medical Imaging, China Medical University Hsinchu Hospital, No. 199, Sec. 1, Xinglong Rd., Zhubei, Hsinchu 302, Taiwan, ROC.
  • Cheng-En Juan
Department of Automatic Control Engineering, Feng Chia University, No. 100, Wenhwa Rd., Seatwen, Taichung 40724, Taiwan, ROC.
  • Yi-Jui Liu
    Department of Automatic Control Engineering, Feng Chia University, Taichung, Taiwan.
  • Chun-Jung Juan
    Department of Medical Imaging, China Medical University Hsinchu Hospital, Hsinchu, Taiwan.