DLPVI: Deep learning framework integrating projection, view-by-view backprojection, and image domains for high- and ultra-sparse-view CBCT reconstruction.
Journal:
Computerized Medical Imaging and Graphics: The Official Journal of the Computerized Medical Imaging Society
PMID:
39921927
Abstract
This study proposes a deep learning framework, DLPVI, which integrates the projection, view-by-view backprojection (VVBP), and image domains to improve the quality of high-sparse-view and ultra-sparse-view cone-beam computed tomography (CBCT) images. DLPVI comprises a projection domain sub-framework, a VVBP domain sub-framework, and a Transformer-based image domain model. First, full-view projections were restored from sparse-view projections via the projection domain sub-framework, then filtered and view-by-view backprojected to generate VVBP raw data. Next, the VVBP raw data was processed by the VVBP domain sub-framework to suppress residual noise and artifacts and produce CBCT axial images. Finally, the axial images were further refined by the image domain model. DLPVI was trained, validated, and tested on CBCT data from 163, 30, and 30 real patients, respectively. Quantitative metrics, including root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and feature similarity (FSIM), were calculated to evaluate method performance. DLPVI was compared with 15 state-of-the-art (SOTA) methods, including 2 projection domain models, 10 image domain models, and 3 projection-image dual-domain frameworks, on 1/8 high-sparse-view and 1/16 ultra-sparse-view reconstruction tasks. Statistical analysis was conducted using the Kruskal-Wallis test, followed by the post-hoc Dunn's test. Experimental results demonstrated that DLPVI outperformed all 15 SOTA methods on both tasks, with statistically significant improvements (p < 0.05 in the Kruskal-Wallis test and p < 0.05/15, i.e., Bonferroni-corrected for 15 comparisons, in Dunn's test). The proposed DLPVI effectively improves the quality of high- and ultra-sparse-view CBCT images.
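To make the evaluation protocol concrete, the sketch below shows how two of the quoted metrics (RMSE and PSNR) and the omnibus Kruskal-Wallis test could be computed with NumPy and SciPy. This is not the authors' code: the per-patient scores are synthetic placeholders, SSIM/FSIM are omitted (they need a dedicated image-quality library), and in the paper the Kruskal-Wallis test is followed by post-hoc Dunn's tests at the Bonferroni-corrected threshold 0.05/15.

```python
import numpy as np
from scipy import stats

def rmse(ref, test):
    """Root-mean-square error between a reference and a test image."""
    diff = ref.astype(np.float64) - test.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB for a given intensity range."""
    e = rmse(ref, test)
    return float("inf") if e == 0 else 20.0 * np.log10(data_range / e)

# Hypothetical per-patient PSNR scores for three competing methods
# (one score per test patient; the paper uses 30 test patients).
rng = np.random.default_rng(0)
scores_a = rng.normal(38.0, 1.0, 30)   # e.g. the proposed framework
scores_b = rng.normal(36.5, 1.0, 30)   # e.g. a dual-domain baseline
scores_c = rng.normal(35.0, 1.0, 30)   # e.g. an image-domain baseline

# Omnibus nonparametric comparison across methods; significant results
# would then be followed by post-hoc pairwise Dunn's tests.
h_stat, p_value = stats.kruskal(scores_a, scores_b, scores_c)
```

Because PSNR distributions across patients are generally not normal, a rank-based test such as Kruskal-Wallis is a common choice over one-way ANOVA in this setting.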