A deep learning-based multi-view approach to automatic 3D landmarking and deformity assessment of lower limb.
Journal:
Scientific Reports
PMID:
39747979
Abstract
Anatomical landmark detection in CT scan images is widely used in the identification of skeletal disorders. However, the traditional process of manually detecting anatomical landmarks, especially in three dimensions, is both time-consuming and prone to human error. We propose a novel deep-learning-based approach to the automatic detection of 3D landmarks in CT images of the lower limb. We generate multiple view renderings of the scanned limb and then integrate them, using a pyramid-style convolutional neural network, to build a 3D model of the bone and to determine the spatial coordinates of the landmarks. These landmarks are then used to calculate key anatomical indicators that enable the reliable diagnosis of skeletal disorders. To evaluate the performance of the proposed approach, we compare its predicted landmark coordinates and the resulting anatomical indicators (both 2D and 3D) with those determined by human experts. The average coordinate error (the difference between automatically and manually determined coordinates) of the landmarks was 2.05 ± 1.36 mm on test data, whereas the average angular error (the difference between automatically and manually calculated angles in three and two dimensions) on the same dataset was 0.53 ± 0.66° and 0.74 ± 0.87°, respectively. Our proposed deep-learning-based approach not only outperforms traditional landmark detection and indicator assessment methods in terms of speed and accuracy but also improves the credibility of the ensuing diagnoses by avoiding manual landmarking errors.
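The reported error metrics follow directly from the landmark coordinates. The sketch below is a minimal illustration, not the authors' evaluation code: it assumes predicted and reference landmarks are given as NumPy arrays of shape (N, 3) in millimetres, that the coordinate error is the per-landmark Euclidean distance, and that an anatomical angle is measured between two vectors defined by a landmark triplet. The example landmark values are hypothetical.

```python
import numpy as np

def coordinate_errors(pred, ref):
    """Per-landmark Euclidean distance (mm) between predicted and
    manually annotated 3D landmark coordinates, both shaped (N, 3)."""
    return np.linalg.norm(pred - ref, axis=1)

def angle_deg(a, b, c):
    """Angle (degrees) at vertex b formed by landmarks a, b, c,
    e.g. an anatomical axis angle defined by three landmarks."""
    u, v = a - b, c - b
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical example: three predicted vs. reference landmarks (mm).
pred = np.array([[10.2, 45.1, 88.0], [12.9, 60.3, 120.5], [9.8, 75.0, 150.2]])
ref  = np.array([[10.0, 44.8, 87.5], [13.1, 60.0, 121.0], [10.1, 74.6, 150.0]])

errors = coordinate_errors(pred, ref)
print("mean coordinate error: %.2f ± %.2f mm" % (errors.mean(), errors.std()))

# Angular error: the angle computed from predicted landmarks compared
# with the same angle computed from the reference landmarks.
ang_err = abs(angle_deg(*pred) - angle_deg(*ref))
print("angular error: %.2f°" % ang_err)
```

Under these assumptions, averaging the per-landmark distances over a test set yields the mean ± standard deviation coordinate error, and averaging the absolute differences between automatically and manually derived angles yields the angular error reported in the abstract.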