Evaluating masked self-supervised learning frameworks for 3D dental model segmentation tasks.
Journal:
Scientific Reports
PMID:
40368972
Abstract
The application of deep learning to dental models is crucial for automated computer-aided treatment planning. However, developing highly accurate models requires a substantial amount of accurately labeled data, which is challenging to obtain, especially in the medical domain. Masked self-supervised learning has shown great promise in overcoming this data scarcity, but its effectiveness has not been well explored in the 3D domain, particularly on dental models. In this work, we investigate the applicability of four recently published masked self-supervised learning frameworks (Point-BERT, Point-MAE, Point-GPT, and Point-M2AE) for improving downstream tasks such as tooth and braces segmentation. These frameworks were pre-trained on a proprietary dataset of over 4000 unlabeled 3D dental models and fine-tuned on the publicly available Teeth3DS dataset for tooth segmentation and on a self-constructed braces segmentation dataset. Through a set of experiments, we demonstrate that pre-training can enhance the performance of downstream tasks, especially when training data is scarce or imbalanced, a critical factor for clinical usability. Our results show that the benefits are most noticeable when training data is limited but diminish as more labeled data becomes available, providing insights into when and how this technique should be applied to maximize its effectiveness.
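To make the pre-training objective referenced above concrete, the following is a minimal sketch of a Point-MAE-style masked autoencoding setup on raw point clouds: the scan is split into local patches, a fraction of patches is masked, only visible patches are encoded, and the masked patches are reconstructed under a Chamfer-distance loss. This is an illustrative assumption of how such frameworks operate, not the authors' actual code; all names (PointMAESketch, group_points, chamfer) are hypothetical, and real implementations typically use farthest point sampling for patch centers rather than the random sampling shown here.

```python
# Illustrative sketch of masked point-cloud pretraining (Point-MAE-style).
# Names and architecture sizes are assumptions, not the paper's implementation.
import torch
import torch.nn as nn

def group_points(points, num_groups=64, group_size=32):
    """Split a point cloud (B, N, 3) into local patches via random centers + kNN."""
    B, N, _ = points.shape
    idx = torch.randint(0, N, (B, num_groups), device=points.device)
    centers = torch.gather(points, 1, idx.unsqueeze(-1).expand(-1, -1, 3))   # (B, G, 3)
    d = torch.cdist(centers, points)                                          # (B, G, N)
    knn = d.topk(group_size, largest=False).indices                           # (B, G, K)
    patches = torch.gather(
        points.unsqueeze(1).expand(-1, num_groups, -1, -1),
        2, knn.unsqueeze(-1).expand(-1, -1, -1, 3))                           # (B, G, K, 3)
    return patches - centers.unsqueeze(2), centers                            # patch-local coordinates

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets of shape (..., K, 3)."""
    a2 = a.reshape(-1, a.shape[-2], 3)
    b2 = b.reshape(-1, b.shape[-2], 3)
    d = torch.cdist(a2, b2)
    return d.min(-1).values.mean() + d.min(-2).values.mean()

class PointMAESketch(nn.Module):
    def __init__(self, dim=128, group_size=32, mask_ratio=0.6):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Sequential(nn.Linear(group_size * 3, dim), nn.GELU(), nn.Linear(dim, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)             # encodes visible patches
        self.decoder = nn.TransformerEncoder(layer, num_layers=1)             # same layer type, kept small
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.head = nn.Linear(dim, group_size * 3)                            # reconstructs masked patches

    def forward(self, points):
        patches, centers = group_points(points)                               # (B, G, K, 3)
        B, G, K, _ = patches.shape
        tokens = self.embed(patches.reshape(B, G, K * 3))
        n_mask = int(G * self.mask_ratio)
        perm = torch.randperm(G, device=points.device)
        masked, visible = perm[:n_mask], perm[n_mask:]
        latent = self.encoder(tokens[:, visible])                             # encode visible patches only
        dec_in = torch.cat([latent, self.mask_token.expand(B, n_mask, -1)], dim=1)
        dec_out = self.decoder(dec_in)[:, -n_mask:]                           # predictions for masked slots
        pred = self.head(dec_out).reshape(B, n_mask, K, 3)
        return chamfer(pred, patches[:, masked])                              # reconstruction loss

# Usage: pretrain on unlabeled scans, then reuse `embed` + `encoder` for segmentation fine-tuning.
model = PointMAESketch()
loss = model(torch.randn(2, 1024, 3))   # toy batch of two 1024-point clouds
loss.backward()
```

In this workflow, the pretrained patch embedding and encoder are carried over to the downstream segmentation model and fine-tuned on the labeled data (e.g., Teeth3DS or a braces dataset), which is where the abstract reports the largest gains under limited or imbalanced labels.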