Transformative Deep Neural Network Approaches in Kidney Ultrasound Segmentation: Empirical Validation with an Annotated Dataset.

Journal: Interdisciplinary sciences, computational life sciences

Abstract

Kidney ultrasound (US) images are primarily employed for diagnosing various renal diseases. One such task is renal localization and detection, which can be carried out by segmenting kidney US images. However, kidney segmentation from US images is challenging due to low contrast, speckle noise, fluid, variations in kidney shape, and modality artifacts. Moreover, well-annotated US datasets for renal segmentation and detection are scarce. This study aims to build a novel, well-annotated dataset containing 44,880 US images. In addition, we propose a novel training scheme that utilizes the encoder and decoder parts of a state-of-the-art segmentation algorithm. In the pre-processing step, pixel intensity normalization improves contrast and facilitates model convergence. The modified encoder-decoder architecture improves on atrous (dilated) spatial pyramid pooling, cascaded atrous convolutions, and batch normalization. The decoder gradually reconstructs spatial information, including complete object boundaries, and a post-processing module based on concave curvature reduces the false positive rate of the results. We present benchmark findings to validate the quality of the proposed training scheme and dataset, applying six evaluation metrics and several baseline segmentation approaches to the new kidney US dataset. Among the evaluated models, DeepLabv3+ performed best, achieving Dice, 95th-percentile Hausdorff distance (HD95), accuracy, specificity, average symmetric surface distance (ASSD), and recall scores of 89.76%, 9.91, 98.14%, 98.83%, 3.03, and 90.68%, respectively. The proposed training strategy aids state-of-the-art segmentation models, resulting in better-segmented predictions. Furthermore, the large, well-annotated, public kidney US dataset will serve as a valuable baseline source for future medical image analysis research.
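The abstract mentions pixel-intensity normalization as a pre-processing step and the Dice coefficient as an evaluation metric. As an illustrative sketch only (the paper's exact normalization scheme and metric implementations may differ), both can be computed on NumPy arrays as follows:

```python
import numpy as np

def normalize_intensity(img):
    """Min-max normalize pixel intensities to [0, 1].

    A common choice for intensity normalization of US images;
    assumed here for illustration, not the paper's exact scheme.
    """
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:  # constant image: avoid division by zero
        return np.zeros_like(img)
    return (img - lo) / (hi - lo)

def dice_score(pred, target):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect overlap
    return 2.0 * np.logical_and(pred, target).sum() / denom
```

For example, two identical masks yield a Dice score of 1.0, while two disjoint masks yield 0.0; boundary-oriented metrics such as HD95 and ASSD additionally require surface-distance computations (available in libraries such as MONAI or SimpleITK).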

Authors

  • Rashid Khan
    College of Big Data and Internet, Shenzhen Technology University, Shenzhen, 518188, China.
  • Chuda Xiao
    College of Big Data and Internet, Shenzhen Technology University, Shenzhen, 518188, China.
  • Yang Liu
    Department of Computer Science, Hong Kong Baptist University, Hong Kong, China.
  • Jinyu Tian
    Wuerzburg Dynamics Inc., Shenzhen, 518188, China.
  • Zhuo Chen
    State Key Laboratory Breeding Base of Green Pesticide and Agricultural Bioengineering, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Guizhou University, Huaxi District, Guiyang 550025, China. Electronic address: gychenzhuo@aliyun.com.
  • Liyilei Su
    Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China.
  • Dan Li
    State Key Laboratory of Biotherapy and Cancer Center, West China Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan 610041, PR China.
  • Haseeb Hassan
    College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen, China.
  • Haoyu Li
  • Weiguo Xie
    Wuerzburg Dynamics Inc., Shenzhen, 518188, China.
  • Wen Zhong
    Department of Chemical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania 15217, United States.
  • Bingding Huang
    College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China. Electronic address: huangbingding@sztu.edu.cn.