A dual self-attentive transformer U-Net model for precise pancreatic segmentation and fat fraction estimation.

Journal: BMC Medical Imaging

Abstract

Accurately segmenting the pancreas from abdominal computed tomography (CT) images is crucial for detecting and managing pancreatic diseases, such as diabetes and tumors. Type 2 diabetes and metabolic syndrome are associated with pancreatic fat accumulation, and calculating the fat fraction aids in the investigation of β-cell dysfunction and insulin resistance. The most widely used pancreas segmentation techniques are U-shaped networks based on deep convolutional neural networks (DCNNs); because they rely on local receptive fields, they struggle to capture long-range dependencies in an image. This research proposes a novel dual Self-attentive Transformer U-Net (DSTUnet) model for accurate pancreatic segmentation that addresses this problem. The model incorporates dual self-attention Swin Transformer blocks on both the encoder and decoder sides to facilitate global context extraction and refine candidate regions. After the pancreas is segmented with the DSTUnet, a histogram analysis is used to estimate the fat fraction. The proposed method demonstrated excellent performance on the standard dataset, achieving a Dice similarity coefficient (DSC) of 93.7% and a Hausdorff distance (HD) of 2.7 mm. The average volume of the pancreas was 92.42, and its fat volume fraction (FVF) was 13.37%.
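The abstract does not give the exact histogram procedure or Hounsfield-unit (HU) thresholds used for the fat fraction. As a minimal sketch of the general idea, the fraction can be estimated by counting the voxels inside the segmentation mask whose HU values fall in an assumed fat range; the `fat_hu_min`/`fat_hu_max` bounds below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fat_volume_fraction(ct_hu, mask, fat_hu_min=-190.0, fat_hu_max=-30.0):
    """Estimate the fat volume fraction (%) of a segmented organ.

    ct_hu: 3-D CT volume in Hounsfield units.
    mask:  binary segmentation mask of the same shape (e.g. DSTUnet output).
    The HU range for fat is an assumption for illustration only.
    """
    organ_hu = ct_hu[mask.astype(bool)]       # HU values inside the organ
    if organ_hu.size == 0:
        return 0.0
    is_fat = (organ_hu >= fat_hu_min) & (organ_hu <= fat_hu_max)
    return 100.0 * is_fat.sum() / organ_hu.size

# Toy example: a 4x4x4 "pancreas" of soft tissue (~50 HU)
# with 8 of 64 voxels set to a fat-like value (-100 HU).
ct = np.full((4, 4, 4), 50.0)
ct[0, :2, :] = -100.0
mask = np.ones_like(ct)
print(fat_volume_fraction(ct, mask))  # 12.5
```

In practice the in-mask histogram would be inspected (or fitted) to choose the fat/parenchyma cutoff rather than hard-coding it, but the counting step is the same.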

Authors

  • Ashok Shanmugam
    Department of Electronics and Communication Engineering, Vel Tech Multi Tech Dr. Rangarajan Dr. Sakunthala Engineering College, Chennai, Tamil Nadu, India.
  • Prianka Ramachandran Radhabai
    Department of AIML, New Horizon College of Engineering, Bangalore, Karnataka, India.
  • Kavitha Kvn
    Department of Communication Engineering, School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India.
  • Agbotiname Lucky Imoize
    Department of Electrical and Electronics Engineering, Faculty of Engineering, University of Lagos, Akoka 100213, Lagos State, Nigeria.