SUnet: A multi-organ segmentation network based on multiple attention.

Journal: Computers in Biology and Medicine
Published Date:

Abstract

Organ segmentation in abdominal or thoracic computed tomography (CT) images plays a crucial role in medical diagnosis, as it enables doctors to locate and evaluate organ abnormalities quickly, thereby guiding surgical planning and aiding treatment decision-making. This paper proposes a novel and efficient medical image segmentation method called SUnet for multi-organ segmentation in the abdomen and thorax. SUnet is a fully attention-based neural network. First, an efficient spatial reduction attention (ESRA) module is introduced not only to extract image features more effectively, but also to reduce the overall number of model parameters and alleviate overfitting. Second, SUnet's multiple attention-based feature fusion module enables effective cross-scale feature integration. Additionally, an enhanced attention gate (EAG) module is designed using grouped convolution and residual connections, providing richer semantic features. We evaluate the proposed model on the Synapse multi-organ segmentation dataset and the Automated Cardiac Diagnosis Challenge (ACDC) dataset. SUnet achieves average Dice scores of 84.29% and 92.25% on these two datasets, respectively, outperforming other models of similar complexity and size and achieving state-of-the-art results.
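
To make the two attention components mentioned in the abstract concrete, the PyTorch sketch below illustrates (a) attention over spatially reduced keys/values, the general idea behind an ESRA-style module, and (b) an attention gate over a skip connection built from grouped convolutions with a residual path, in the spirit of the EAG module. The class names, layer layouts, reduction ratio, and group count are assumptions for illustration and are not the authors' SUnet implementation.

```python
# Minimal sketch, assuming an ESRA-like spatial-reduction attention and an
# EAG-like gated skip connection; not the published SUnet code.
import torch
import torch.nn as nn


class SpatialReductionAttention(nn.Module):
    """Self-attention whose keys/values come from a spatially down-sampled
    feature map, shrinking the attention matrix and the parameter count."""

    def __init__(self, dim: int, num_heads: int = 4, sr_ratio: int = 2):
        super().__init__()
        # Strided convolution reduces the (H, W) grid before the K/V projection.
        self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio, stride=sr_ratio)
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) encoder feature map.
        b, c, h, w = x.shape
        q = x.flatten(2).transpose(1, 2)                    # (B, H*W, C) queries
        kv = self.norm(self.sr(x).flatten(2).transpose(1, 2))  # (B, H*W/r^2, C)
        out, _ = self.attn(q, kv, kv)                       # attend over fewer tokens
        return out.transpose(1, 2).reshape(b, c, h, w)


class EnhancedAttentionGate(nn.Module):
    """Attention gate on a skip connection using grouped 1x1 convolutions,
    with a residual path so ungated features are preserved (assumed layout)."""

    def __init__(self, dim: int, groups: int = 4):
        super().__init__()
        self.theta = nn.Conv2d(dim, dim, 1, groups=groups)  # project skip features
        self.phi = nn.Conv2d(dim, dim, 1, groups=groups)    # project gating signal
        self.psi = nn.Sequential(nn.ReLU(inplace=True),
                                 nn.Conv2d(dim, 1, 1),
                                 nn.Sigmoid())              # per-pixel attention map

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        attn = self.psi(self.theta(skip) + self.phi(gate))  # (B, 1, H, W)
        return skip * attn + skip                           # gated output + residual


if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)
    print(SpatialReductionAttention(64)(feats).shape)       # torch.Size([1, 64, 32, 32])
    print(EnhancedAttentionGate(64)(feats, feats).shape)    # torch.Size([1, 64, 32, 32])
```

With a reduction ratio r, the attention matrix shrinks from (HW x HW) to (HW x HW/r^2), which is how spatial-reduction attention keeps memory and parameter cost manageable at high resolutions; the grouped 1x1 convolutions in the gate likewise cut parameters relative to full convolutions.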

Authors

  • Xiaosen Li
    School of Artificial Intelligence, Guangxi Minzu University, Nanning, 530006, China; Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou, 325105, China.
  • Xiao Qin
    Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA, 01609, USA.
  • Chengliang Huang
    Academy of Artificial Intelligence, Zhejiang Dongfang Polytechnic, Wenzhou, 325025, China.
  • Yuer Lu
    Wenzhou Institute and Wenzhou Key Laboratory of Biophysics, University of Chinese Academy of Sciences, Wenzhou, 325001, China.
  • Jinyan Cheng
    Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou, 325105, China.
  • Liansheng Wang
    Department of Computer Science, Xiamen University, Xiamen 361005, China.
  • Ou Liu
    Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou, 325105, China.
  • Jianwei Shuai
    Department of Physics, and Fujian Provincial Key Laboratory for Soft Functional Materials Research, Xiamen University, Xiamen 361005, China; Wenzhou Institute, University of Chinese Academy of Sciences, and Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Wenzhou, Zhejiang 325001, China; National Institute for Data Science in Health and Medicine, School of Medicine, Xiamen University, Xiamen 361102, China. Electronic address: jianweishuai@xmu.edu.cn.
  • Chang-An Yuan
    Guangxi Key Lab of Human-machine Interaction and Intelligent Decision, Nanning Normal University, Nanning, 530023, China; Guangxi Academy of Science, Nanning, 530007, China. Electronic address: 68852917@qq.com.