Improving breast mass classification by shared data with domain transformation using a generative adversarial network.

Journal: Computers in biology and medicine

Abstract

Training of a convolutional neural network (CNN) generally requires a large dataset; however, it is not easy to collect a large medical image dataset. The purpose of this study is to investigate the utility of synthetic images in training CNNs and to demonstrate the applicability of unrelated images by domain transformation. Mammograms showing 202 benign and 212 malignant masses were used for evaluation. To create synthetic data, a cycle-consistent generative adversarial network (CycleGAN) was trained with 599 lung nodules in computed tomography (CT) and 1430 breast masses on digitized mammograms from the Digital Database for Screening Mammography (DDSM). A CNN was trained to classify masses as benign or malignant. The classification performance was compared between networks trained with the original data, augmented data, synthetic data, DDSM images, and natural images (the ImageNet dataset). The results were evaluated in terms of classification accuracy and the area under the receiver operating characteristic curve (AUC). The classification accuracy improved from 65.7% to 67.1% with data augmentation. The use of an ImageNet pretrained model was useful (79.2%). Performance improved slightly when only the synthetic images or only the DDSM images were used for pretraining (67.6% and 72.5%, respectively). When the ImageNet pretrained model was further trained with the synthetic images, the classification performance improved slightly (81.4%), although the difference in AUCs was not statistically significant. The use of the synthetic images had an effect similar to that of the DDSM images. The results of this study indicated that synthetic data generated from unrelated lesions by domain transformation could be used to increase the number of training samples.
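The two evaluation metrics named in the abstract, classification accuracy and AUC, can be computed directly from model scores. The sketch below is illustrative only: the labels and scores are hypothetical, not data from the study, and the AUC is computed via the Mann-Whitney U interpretation (the probability that a randomly chosen malignant case receives a higher score than a randomly chosen benign case, with ties counted as 0.5).

```python
def accuracy(labels, scores, threshold=0.5):
    """Fraction of cases whose thresholded score matches the true label."""
    preds = [1 if s >= threshold else 0 for s in scores]
    correct = sum(p == y for p, y in zip(preds, labels))
    return correct / len(labels)


def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive (malignant) case scores higher than a
    randomly chosen negative (benign) case; ties count as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


if __name__ == "__main__":
    # Hypothetical example: 1 = malignant, 0 = benign;
    # scores stand in for CNN outputs.
    y = [1, 1, 1, 0, 0, 0]
    s = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
    print(f"accuracy = {accuracy(y, s):.3f}")  # 0.667
    print(f"AUC      = {auc(y, s):.3f}")       # 0.889
```

Note that accuracy depends on an arbitrary decision threshold, whereas AUC summarizes ranking performance over all thresholds, which is why the abstract reports both.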

Authors

  • Chisako Muramatsu
    Department of Intelligent Image Information, Graduate School of Medicine, Gifu University, 1-1 Yanagido, Gifu, Gifu 501-1194, Japan. Electronic address: chisa@fjt.info.gifu-u.ac.jp.
  • Mizuho Nishio
    Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Kyoto, Japan.
  • Takuma Goto
    Department of Intelligence Engineering, Graduate School of Natural Science and Technology, Gifu University, 1-1 Yanagido, Gifu, 501-1194, Japan.
  • Mikinao Oiwa
    Department of Radiology, Nagoya Medical Center, 4-1-1 Sannomaru, Naka-ku, Nagoya, Aichi, 460-0001, Japan.
  • Takako Morita
    Department of Breast Surgery, Nagoya Medical Center, 4-1-1 Sannomaru, Naka-ku, Nagoya, Aichi, 460-0001, Japan.
  • Masahiro Yakami
    Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Kyoto, Japan.
  • Takeshi Kubo
    Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Kyoto, Japan.
  • Kaori Togashi
    Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Kyoto, Japan.
  • Hiroshi Fujita
    Department of Intelligent Image Information, Division of Regeneration and Advanced Medical Sciences, Graduate School of Medicine, Gifu University, 1-1 Yanagido, Gifu 501-1194, Japan.