Deep learning based direct segmentation assisted by deformable image registration for cone-beam CT based auto-segmentation for adaptive radiotherapy.

Journal: Physics in Medicine & Biology
Published Date:

Abstract

Cone-beam CT (CBCT)-based online adaptive radiotherapy calls for accurate auto-segmentation to reduce the time cost for physicians. However, deep learning (DL)-based direct segmentation of CBCT images is a challenging task, mainly due to the poor image quality and the lack of large, well-labelled training datasets. Deformable image registration (DIR) is often used to propagate the manual contours on the planning CT (pCT) of the same patient to the CBCT. In this work, we address these problems with the assistance of DIR. Our method consists of three main components. First, we use deformed pCT contours derived from multiple DIR methods between pCT and CBCT as pseudo labels for initial training of the DL-based direct segmentation model. Second, we use deformed pCT contours from another DIR algorithm as influencer volumes to define the region of interest for DL-based direct segmentation. Third, the initially trained DL model is further fine-tuned using a smaller set of true labels. Nine patients are used for model evaluation. We found that DL-based direct segmentation of CBCT without influencer volumes performs much worse than DIR-based segmentation. However, adding deformed pCT contours as influencer volumes in the direct segmentation network dramatically improves segmentation performance, reaching the accuracy level of DIR-based segmentation. The DL model with influencer volumes can be further improved through fine-tuning using a smaller set of true labels, achieving a mean Dice similarity coefficient of 0.86, a Hausdorff distance at the 95th percentile of 2.34 mm, and an average surface distance of 0.56 mm. A DL-based direct CBCT segmentation model can thus be improved to outperform DIR-based segmentation models by using deformed pCT contours as pseudo labels and influencer volumes for initial training, and by using a smaller set of true labels for model fine-tuning.
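The three evaluation metrics reported in the abstract (Dice similarity coefficient, 95th-percentile Hausdorff distance, and average surface distance) can be computed from binary segmentation masks. The sketch below is not from the paper; it is a generic NumPy/SciPy implementation, with function names of my own choosing, that illustrates how such metrics are typically defined on voxel masks (assuming isotropic voxel spacing for simplicity):

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def _surface(mask):
    """Voxel coordinates of the mask surface (voxels removed by one erosion)."""
    return np.argwhere(mask & ~binary_erosion(mask))

def _surface_dists(a, b, spacing=1.0):
    """Nearest surface-to-surface distances in both directions (mm if spacing is mm)."""
    pa = _surface(a) * spacing
    pb = _surface(b) * spacing
    return cKDTree(pb).query(pa)[0], cKDTree(pa).query(pb)[0]

def hd95(a, b, spacing=1.0):
    """Hausdorff distance at the 95th percentile (symmetric)."""
    d_ab, d_ba = _surface_dists(a, b, spacing)
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

def asd(a, b, spacing=1.0):
    """Average surface distance (symmetric)."""
    d_ab, d_ba = _surface_dists(a, b, spacing)
    return (d_ab.sum() + d_ba.sum()) / (d_ab.size + d_ba.size)
```

For anisotropic voxels, `spacing` would be a per-axis vector rather than a scalar; production pipelines typically use a validated library implementation (e.g. the metrics in MONAI) rather than hand-rolled code.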

Authors

  • Xiao Liang
  • Howard Morgan
    Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, United States.
  • Ti Bai
  • Michael Dohopolski
    Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, United States of America.
  • Dan Nguyen
    University of Massachusetts Chan Medical School, Worcester, Massachusetts.
  • Steve Jiang