A multi-scale information fusion medical image segmentation network based on convolutional kernel coupled update mechanism.
Journal:
Computers in Biology and Medicine
Published Date:
Jan 28, 2025
Abstract
Medical image segmentation is pivotal in disease diagnosis and treatment. This paper presents a novel network architecture for medical image segmentation, termed TransDLNet, which is engineered to enhance the efficiency of multi-scale information utilization. TransDLNet integrates convolutional neural networks and Transformers, facilitating cross-level multi-scale information fusion for complex medical images. Central to its design is the attention-dilated depthwise convolution (ADDC) module, which applies depthwise convolution (DWConv) with varied dilation rates to improve the capture of local detail. A convolution kernel coupled update mechanism and a channel information compensation method further ensure robust feature representation. In addition, the cross-level grouped attention merge (CGAM) module, used in both the encoder and the decoder, enhances feature interaction and integration across scales, strengthening the overall feature representation. We conducted a comprehensive experimental analysis and quantitative evaluation on four datasets spanning diverse imaging modalities. The results indicate that the proposed method achieves strong segmentation performance and generalizes well across modalities.
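To make the multi-dilation idea behind ADDC concrete, the following is a minimal sketch of a depthwise convolution block with several dilation rates, written in PyTorch. The class name MultiDilationDWConv, the dilation rates (1, 2, 3), the summation-based branch fusion, and the pointwise channel-mixing step are illustrative assumptions and do not reproduce the authors' implementation.

# Minimal sketch of a multi-dilation depthwise convolution block in the spirit of
# the ADDC description (DWConv branches with different dilation rates, fused and
# followed by a pointwise channel-mixing step). All names, dilation rates, and the
# fusion scheme are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class MultiDilationDWConv(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 3)):
        super().__init__()
        # One depthwise 3x3 branch per dilation rate; padding = dilation keeps
        # the spatial size unchanged.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d,
                      dilation=d, groups=channels, bias=False)
            for d in dilations
        )
        # Pointwise convolution mixes channel information after the branch sum,
        # a simple stand-in for channel-level compensation.
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sum the multi-scale depthwise responses, then mix channels.
        y = sum(branch(x) for branch in self.branches)
        return self.act(self.norm(self.pointwise(y)))


if __name__ == "__main__":
    feat = torch.randn(1, 64, 56, 56)        # B, C, H, W feature map
    block = MultiDilationDWConv(channels=64)
    print(block(feat).shape)                 # torch.Size([1, 64, 56, 56])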