A dual self-attentive transformer U-Net model for precise pancreatic segmentation and fat fraction estimation.
Journal:
BMC medical imaging
Published Date:
Aug 4, 2025
Abstract
Accurately segmenting the pancreas from abdominal computed tomography (CT) images is crucial for detecting and managing pancreatic diseases such as diabetes and tumors. Type 2 diabetes and metabolic syndrome are associated with pancreatic fat accumulation, and calculating the fat fraction aids in the investigation of β-cell dysfunction and insulin resistance. The most widely used pancreas segmentation technique is the U-shaped network based on deep convolutional neural networks (DCNNs), but such networks struggle to capture long-range dependencies in an image because they rely on local receptive fields. This research proposes a novel dual Self-attentive Transformer Unet (DSTUnet) model for accurate pancreatic segmentation that addresses this problem. The model incorporates dual self-attention Swin transformers on both the encoder and decoder sides to facilitate global context extraction and refine candidate regions. After the pancreas is segmented with the DSTUnet, a histogram analysis is used to estimate the fat fraction. The proposed method demonstrated excellent performance on the standard dataset, achieving a Dice similarity coefficient (DSC) of 93.7% and a Hausdorff distance (HD) of 2.7 mm. The average volume of the pancreas was 92.42, and its fat volume fraction (FVF) was 13.37%.
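The fat-fraction step described above can be sketched as a simple voxel-counting computation: within the segmented pancreas mask, voxels whose Hounsfield-unit (HU) values fall in a fat window are counted and divided by the total pancreatic voxel count. The abstract does not state the paper's exact thresholds, so the HU window below (a range commonly used for adipose tissue in CT) and the function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fat_volume_fraction(ct_hu, pancreas_mask, fat_range=(-190.0, -30.0)):
    """Estimate the fat volume fraction (FVF, %) inside a segmented pancreas.

    ct_hu: CT volume in Hounsfield units (any shape).
    pancreas_mask: boolean mask of the same shape, e.g. from the segmentation model.
    fat_range: HU window treated as fat; a common literature choice, used here
               as an assumption since the paper's thresholds are not given.
    """
    voxels = ct_hu[pancreas_mask]          # HU values restricted to the pancreas
    if voxels.size == 0:
        return 0.0                         # empty mask: no pancreas segmented
    fat = (voxels >= fat_range[0]) & (voxels <= fat_range[1])
    return 100.0 * fat.sum() / voxels.size # percentage of fat-like voxels

# Toy example: 3 fat-like voxels out of 10 inside the mask
ct = np.array([-100, -50, -40, 10, 20, 30, 40, 50, 60, 70], dtype=float)
mask = np.ones_like(ct, dtype=bool)
print(fat_volume_fraction(ct, mask))  # 30.0
```

In practice the histogram analysis may instead fit or inspect the HU distribution within the mask rather than apply a hard threshold, but the ratio of fat-classified to total pancreatic voxels is the quantity reported as FVF.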