Synthetic CT generation based on CBCT using improved vision transformer CycleGAN.

Journal: Scientific reports
Published Date:

Abstract

Cone-beam computed tomography (CBCT) is a crucial component of adaptive radiation therapy; however, it frequently suffers from artifacts and noise, which significantly constrain its clinical utility. While CycleGAN is widely employed for CT image synthesis, it has a notable limitation: it captures global features inadequately. To address these challenges, we introduce a refined unsupervised learning model called the improved vision transformer CycleGAN (IViT-CycleGAN). First, we integrate a U-Net framework built upon ViT. Next, we augment the feed-forward neural network by incorporating deep convolutional networks. Finally, we stabilize model training by introducing a gradient penalty and adding an extra loss term to the generator loss. Experiments demonstrate from multiple perspectives that the synthetic CT (sCT) generated by our model has significant advantages over other unsupervised learning models, validating its clinical applicability and robustness. In future clinical practice, our model has the potential to assist clinicians in formulating precise radiotherapy plans.
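
To make the abstract's architectural ideas concrete, the sketch below illustrates two of the named ingredients in generic PyTorch: a transformer feed-forward block augmented with a convolution, and a WGAN-style gradient penalty for stabilizing discriminator training. This is not the authors' released code; module names, shapes, the use of a depthwise 3x3 convolution, and the interpolation-based penalty are all illustrative assumptions.

```python
# Minimal sketch (assumed implementation, not the paper's code) of:
#  (1) a ViT feed-forward network with a convolution between its two projections,
#  (2) a gradient penalty term of the kind used to stabilize GAN discriminators.
import torch
import torch.nn as nn


class ConvFeedForward(nn.Module):
    """Transformer FFN with a depthwise 3x3 convolution inserted between the two
    linear projections; tokens are reshaped to a 2D feature map for the conv."""

    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.dwconv = nn.Conv2d(hidden_dim, hidden_dim, kernel_size=3,
                                padding=1, groups=hidden_dim)  # depthwise conv
        self.act = nn.GELU()
        self.fc2 = nn.Linear(hidden_dim, dim)

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # x: (batch, h * w, dim) token sequence from the ViT encoder
        x = self.fc1(x)
        b, n, c = x.shape
        x = x.transpose(1, 2).reshape(b, c, h, w)   # tokens -> feature map
        x = self.dwconv(x)
        x = x.flatten(2).transpose(1, 2)            # feature map -> tokens
        x = self.act(x)
        return self.fc2(x)


def gradient_penalty(discriminator: nn.Module,
                     real: torch.Tensor,
                     fake: torch.Tensor) -> torch.Tensor:
    """WGAN-GP style penalty: push the discriminator's gradient norm toward 1
    on random interpolates of real and synthetic images."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake.detach()).requires_grad_(True)
    scores = discriminator(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return ((grad_norm - 1.0) ** 2).mean()
```

In a training loop of this shape, the penalty would typically be scaled by a weight (e.g. lambda_gp) and added to the discriminator loss, while the generator loss would carry the extra term the abstract mentions; the specific weighting is a choice of the paper and is not reproduced here.
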

Authors

  • Yuxin Hu
    Department of Electrical Engineering, Stanford University, Stanford, CA, United States.
  • Han Zhou
    Jiangsu Provincial Key Laboratory of Special Robot Technology, Hohai University, Changzhou, China.
  • Ning Cao
    Blood Purification Center, General Hospital of Shenyang Military Area Command, Shenyang, China.
  • Can Li
Department of Chemical Engineering, Tsinghua University, Beijing 100084, China.
  • Can Hu
    Department of Urology, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu, China.