GP-CNN-DTEL: Global-Part CNN Model With Data-Transformed Ensemble Learning for Skin Lesion Classification.

Journal: IEEE Journal of Biomedical and Health Informatics

Abstract

Precise skin lesion classification is still challenging due to two problems: (1) the inter-class similarity and intra-class variation of skin lesion images, and (2) the weak generalization ability of a single deep convolutional neural network trained with limited data. We therefore propose a Global-Part Convolutional Neural Network (GP-CNN) model, which treats fine-grained local information and global context information as equally important. The Global-Part model consists of a Global Convolutional Neural Network (G-CNN) and a Part Convolutional Neural Network (P-CNN). Specifically, the G-CNN is trained with downscaled dermoscopy images; it extracts the global-scale information of dermoscopy images and produces the Classification Activation Map (CAM). The P-CNN is trained with CAM-guided cropped image patches and captures the local-scale information of skin lesion regions. Additionally, we present a data-transformed ensemble learning strategy, which further boosts classification performance by integrating the different discriminant information from GP-CNNs trained with original images, color-constancy-transformed images, and feature-saliency-transformed images, respectively. The proposed method is evaluated on the ISIC 2016 and ISIC 2017 Skin Lesion Challenge (SLC) classification datasets. Experimental results indicate that the proposed method achieves state-of-the-art skin lesion classification performance (i.e., an AP value of 0.718 on the ISIC 2016 SLC dataset and an average AUC value of 0.926 on the ISIC 2017 SLC dataset) without any external data, whereas other current methods need to use external data.
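The following is a minimal sketch of the pipeline described above, not the authors' implementation: the helper names, the 224-pixel crop size, the simple-average fusion of global and local predictions, and the use of plain NumPy arrays in place of real CNN outputs are all assumptions made for illustration.

```python
import numpy as np

def cam_guided_crop(image, cam, crop_size=224):
    """Hypothetical helper: crop a patch around the CAM peak,
    illustrating CAM-guided patch extraction for the P-CNN."""
    # Locate the CAM maximum and map it to image coordinates.
    cy, cx = np.unravel_index(np.argmax(cam), cam.shape)
    h, w = image.shape[:2]
    cy = int(cy * h / cam.shape[0])
    cx = int(cx * w / cam.shape[1])
    half = crop_size // 2
    top = int(np.clip(cy - half, 0, max(h - crop_size, 0)))
    left = int(np.clip(cx - half, 0, max(w - crop_size, 0)))
    return image[top:top + crop_size, left:left + crop_size]

def gp_cnn_predict(image, g_cnn, p_cnn):
    """Fuse global (G-CNN) and local (P-CNN) predictions for one image."""
    global_probs, cam = g_cnn(image)           # G-CNN: class probabilities + CAM
    patch = cam_guided_crop(image, cam)        # lesion-focused patch
    local_probs = p_cnn(patch)                 # P-CNN: class probabilities
    return (global_probs + local_probs) / 2.0  # assumed fusion; the paper's scheme may differ

def ensemble_predict(image, models, transforms):
    """Data-transformed ensemble: average GP-CNN predictions over the
    original, color-constancy, and feature-saliency transformed inputs."""
    probs = [gp_cnn_predict(t(image), g, p)
             for (g, p), t in zip(models, transforms)]
    return np.mean(probs, axis=0)
```

Here `g_cnn`, `p_cnn`, and the entries of `transforms` stand in for trained networks and preprocessing functions; plugging in actual models and the identity, color-constancy, and feature-saliency transforms would reproduce the three-branch ensemble described in the abstract.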

Authors

  • Peng Tang
  • Qiaokang Liang
  • Xintong Yan
    College of Electrical and Information Engineering, Hunan University, Changsha, 410082, China; National Engineering Laboratory for Robot Vision Perception and Control Technologies, Hunan Key Laboratory of Intelligent Robot Technology in Electronic Manufacturing, Changsha, 410082, China. Electronic address: yxt@hnu.edu.cn.
  • Shao Xiang
    College of Electrical and Information Engineering, Hunan University, Changsha 410082, China; Hunan Key Laboratory of Intelligent Robot Technology in Electronic Manufacturing, Hunan University, Changsha 410082, China; National Engineering Laboratory for Robot Vision Perception and Control Technologies, Hunan University, Changsha 410082, China.
  • Dan Zhang
    School of Pharmacy, Southwest Medical University, Luzhou 646000, China.