Pixel-wise body composition prediction with a multi-task conditional generative adversarial network.

Journal: Journal of biomedical informatics
Published Date:

Abstract

The analysis of human body composition plays a critical role in health management and disease prevention. However, current medical technologies that accurately assess body composition, such as dual-energy X-ray absorptiometry, computed tomography, and magnetic resonance imaging, suffer from prohibitive cost or ionizing radiation. Recently, body-shape-based techniques using body scanners and depth cameras have brought new opportunities for improving body composition estimation by intelligently analyzing body shape descriptors. In this paper, we present a multi-task deep neural network method utilizing a conditional generative adversarial network to predict pixel-level body composition from only 3D body surfaces. The proposed method predicts 2D subcutaneous and visceral fat maps in a single network with high accuracy. We further introduce an interpreted patch discriminator that optimizes the texture accuracy of the 2D fat maps. The validity and effectiveness of our new method are demonstrated experimentally on the TCIA and LiTS datasets. Our proposed approach outperforms competitive methods by at least 41.3% for whole-body fat percentage, 33.1% for subcutaneous and visceral fat percentage, and 4.1% for regional fat predictions.

Authors

  • Qiyue Wang
    George Washington University.
  • Wu Xue
    Department of Statistics, The George Washington University, USA.
  • Xiaoke Zhang
    George Washington University.
  • Fang Jin
    Department of Statistics, The George Washington University, USA.
  • James Hahn
    George Washington University.