A top-down manner-based DCNN architecture for semantic image segmentation.

Journal: PLOS ONE
Published Date:

Abstract

Given their powerful feature representations for recognition, deep convolutional neural networks (DCNNs) have been driving rapid advances in high-level computer vision tasks. However, their performance in semantic image segmentation is still not satisfactory. Based on an analysis of the visual mechanism, we conclude that DCNNs operating in a purely bottom-up manner are not sufficient, because the semantic image segmentation task requires not only recognition but also visual attention capability. In this study, superpixels containing visual attention information are introduced in a top-down manner, and an extensible architecture is proposed to improve the segmentation results of current DCNN-based methods. We employ the current state-of-the-art fully convolutional network (FCN) and FCN with a conditional random field (DeepLab-CRF) as baselines to validate our architecture. Experimental results on the PASCAL VOC segmentation task qualitatively show that coarse edges and erroneous segmentation results are well improved. Quantitatively, we obtain an intersection over union (IoU) accuracy improvement of about 2%-3% on the PASCAL VOC 2011 and 2012 test sets.
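The abstract does not specify how the superpixel cue is injected, so the following is only a minimal illustrative sketch, not the authors' architecture: it shows one common way to combine superpixels with dense DCNN predictions, namely snapping the per-pixel labels of a baseline network (e.g. FCN) to SLIC superpixels by majority vote, which tends to sharpen object boundaries. The use of skimage.segmentation.slic and the refine_with_superpixels helper are assumptions made for illustration.

```python
# Illustrative sketch only -- NOT the paper's method. Refine a baseline
# DCNN's per-pixel predictions by majority voting inside SLIC superpixels.
import numpy as np
from skimage.segmentation import slic  # SLIC superpixels (assumed choice)


def refine_with_superpixels(image, label_map, n_segments=500):
    """Majority-vote the predicted label inside each superpixel.

    image:      H x W x 3 RGB image.
    label_map:  H x W integer array of per-pixel class predictions
                from a baseline DCNN such as FCN or DeepLab-CRF.
    """
    # Over-segment the image; superpixels adhere to low-level edges.
    segments = slic(image, n_segments=n_segments, compactness=10,
                    start_label=0)

    refined = label_map.copy()
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        # Assign the most frequent predicted class within this superpixel.
        labels, counts = np.unique(label_map[mask], return_counts=True)
        refined[mask] = labels[np.argmax(counts)]
    return refined
```

This sketch only captures the general idea of using edge-adherent superpixels to clean up coarse DCNN boundaries; the paper's specific top-down attention mechanism and its integration with the FCN and DeepLab-CRF baselines are described in the full text.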

Authors

  • Kai Qiao
    National Digital Switching System Engineering and Technological Research Centre, Zhengzhou, China.
  • Jian Chen
    School of Pharmacy, Shanghai Jiaotong University, Shanghai, China.
  • Linyuan Wang
    National Digital Switching System Engineering and Technological Research Centre, Zhengzhou, China.
  • Lei Zeng
    School of Chemical and Environmental Engineering, Hubei Minzu University, Enshi 445000, China.
  • Bin Yan
    National Digital Switching System Engineering and Technological Research Centre, Zhengzhou, China.