Progressive multi-task learning for fine-grained dental implant classification and segmentation in CBCT image.

Journal: Computers in Biology and Medicine
PMID:

Abstract

With the ongoing advancement of digital technology, oral medicine is transitioning from traditional diagnostics to computer-assisted diagnosis and treatment. Accurate identification of dental implants is crucial for ensuring the sustainability and reliability of implant treatment, yet it is complex and time-consuming when patients lack available medical records. In this paper, we propose a multi-task fine-grained CBCT dental implant classification and segmentation method based on deep learning, called MFPT-Net. Built on progressive training with multiscale feature extraction and enhancement, the method can differentiate minor implant features and easily confused similar features, such as implant threads. It addresses the problem of large intra-class differences and small inter-class differences among implants, achieving automatic, synchronized classification and segmentation of implant systems in CBCT images. Our dataset comprises 437 CBCT sequences with 723 dental implants acquired from three different centers; to our knowledge, it is the first collection of this scope used for this CBCT analysis task. Our method achieved satisfactory classification results, with an accuracy of 92.98%, average precision of 93.15%, average recall of 93.31%, and average F1 score of 93.18%, exceeding the second-best model by nearly 10%. Moreover, our segmentation Dice similarity coefficient reached 98.04%, significantly better than the current state-of-the-art method. External clinical validation on 252 implants demonstrated the model's clinical feasibility. These results indicate that the proposed method could assist dentists with dental implant classification and segmentation in CBCT images, enhancing efficiency and accuracy in clinical practice.
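For readers reproducing the evaluation, the metrics reported above follow their standard definitions. The sketch below is an illustrative implementation of the Dice similarity coefficient (for binary segmentation masks) and macro-averaged precision, recall, and F1 (for multi-class implant classification); it is not code from the paper, and the function names are our own.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    # Convention: two empty masks are treated as a perfect match.
    return 2.0 * intersection / total if total > 0 else 1.0

def macro_prf(y_true, y_pred, num_classes):
    """Macro-averaged (per-class, then averaged) precision, recall, and F1."""
    precisions, recalls, f1s = [], [], []
    for c in range(num_classes):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    return (sum(precisions) / num_classes,
            sum(recalls) / num_classes,
            sum(f1s) / num_classes)
```

Macro averaging weights every implant class equally, which matches how "average precision/recall/F1" is typically reported when class frequencies are imbalanced.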

Authors

  • Yue Zhao
    The Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China.
  • Lanying Zhu
    School of Communications and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China.
  • Wendi Wang
    School of Communications and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China.
  • Longwei Lv
    Department of Prosthodontics, Peking University School and Hospital of Stomatology, Beijing, 100081, China; National Center of Stomatology, National Clinical Research Center for Oral Disease, National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing Key Laboratory of Digital Stomatology, Beijing, 100081, China.
  • Qiang Li
    Department of Dermatology, Air Force Medical Center, PLA, Beijing, People's Republic of China.
  • Yang Liu
    Department of Computer Science, Hong Kong Baptist University, Hong Kong, China.
  • Jiang Xi
    Department of Oral Implantology, Peking University School and Hospital of Stomatology, Beijing, 100081, China; National Center of Stomatology, National Clinical Research Center for Oral Disease, National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing Key Laboratory of Digital Stomatology, Beijing, 100081, China. Electronic address: jiangxi2003@bjmu.edu.cn.
  • Chun Yi
    Department of Oral Implantology, Peking University School and Hospital of Stomatology, Beijing, 100081, China; National Center of Stomatology, National Clinical Research Center for Oral Disease, National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing Key Laboratory of Digital Stomatology, Beijing, 100081, China. Electronic address: yichun@bjmu.edu.cn.