Automatic sex estimation using deep convolutional neural network based on orthopantomogram images.

Journal: Forensic Science International

Abstract

Sex estimation is an important part of individual identification in forensic applications. Morphological methods of sex estimation rely predominantly on anatomical measurements. Because sex-chromosome genes are closely linked to facial characteristics, the morphology of the craniofacial hard tissues shows sexual dimorphism. To establish a faster, less labor-intensive, and accurate reference for sex estimation, this study investigated a deep learning-based artificial intelligence (AI) model that estimates sex from orthopantomograms (OPG) in northern Chinese subjects. In total, 10,703 OPG images were divided into training (80%), validation (10%), and test (10%) sets. Different age thresholds were also selected to compare estimation accuracy between adults and minors. The accuracy of sex estimation using the convolutional neural network (CNN) model was higher for adults (90.97%) than for minors (82.64%). This work demonstrates that the proposed model, trained on a large dataset, can perform automatic morphology-based sex estimation with favorable accuracy and practical significance in forensic science for adults in northern China, while also providing a reference for minors to some extent.
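The abstract does not detail how the 80/10/10 partition was performed. A minimal sketch of such a split, using only the Python standard library (the seeded shuffle and the `split_dataset` helper are illustrative assumptions, not the authors' code):

```python
import random

def split_dataset(items, train_frac=0.8, val_frac=0.1, seed=42):
    """Shuffle and partition a dataset into train/validation/test subsets,
    mirroring the 80%/10%/10% split described in the abstract."""
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n = len(items)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (items[:n_train],                      # training set
            items[n_train:n_train + n_val],       # validation set
            items[n_train + n_val:])              # test set (remainder)

# With 10,703 images, as in the study, the split sizes come out to:
train_set, val_set, test_set = split_dataset(range(10703))
print(len(train_set), len(val_set), len(test_set))  # 8562 1070 1071
```

The test set takes the remainder so every image lands in exactly one subset; with 10,703 images this yields 8,562/1,070/1,071.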

Authors

  • Wen-Qing Bu
    Key Laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, 98 XiWu Road, Xi'an 710004, Shaanxi, People's Republic of China; Department of Orthodontics, Stomatological Hospital of Xi'an Jiaotong University, 98 XiWu Road, Xi'an 710004, Shaanxi, People's Republic of China.
  • Yu-Xin Guo
    Key Laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, 98 XiWu Road, Xi'an 710004, Shaanxi, People's Republic of China.
  • Dong Zhang
    Institute of Acoustics, Nanjing University, Nanjing 210093, China.
  • Shao-Yi Du
    National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, People's Republic of China.
  • Meng-Qi Han
    Key Laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, 98 XiWu Road, Xi'an 710004, Shaanxi, People's Republic of China.
  • Zi-Xuan Wu
    Key Laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, 98 XiWu Road, Xi'an 710004, Shaanxi, People's Republic of China; Department of Orthodontics, Stomatological Hospital of Xi'an Jiaotong University, 98 XiWu Road, Xi'an 710004, Shaanxi, People's Republic of China.
  • Yu Tang
    School of Information Science and Engineering, Central South University, Changsha 410083, China. tangyu@csu.edu.cn.
  • Teng Chen
College of Medicine and Forensics, Xi'an Jiaotong University Health Science Center, 76 West Yanta Road, Xi'an 710004, Shaanxi, People's Republic of China.
  • Yu-Cheng Guo
Key Laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, 98 XiWu Road, Xi'an 710004, Shaanxi, People's Republic of China.
  • Hao-Tian Meng
    Key Laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, 98 XiWu Road, Xi'an 710004, Shaanxi, People's Republic of China. Electronic address: menghaotian0803@126.com.