Disentangled Representation Learning for Multiple Attributes Preserving Face Deidentification

Journal: IEEE Transactions on Neural Networks and Learning Systems
Published Date:

Abstract

The face is among the most sensitive pieces of information in shared visual data, so designing an effective face deidentification method that balances facial privacy protection with data utility is an urgent task. Most previous face deidentification methods rely on attribute supervision to preserve one particular identity-independent utility but sacrifice the others. In this article, we propose a novel disentangled representation learning architecture for multiple-attribute-preserving face deidentification, called replacing and restoring variational autoencoders (RVAEs). The RVAEs disentangle identity-related factors from identity-independent factors, so that identity-related information can be obfuscated without altering identity-independent attribute information. Moreover, to refine the details of the facial region and make the deidentified face blend seamlessly into the scene, an image inpainting network is employed to fill in the original facial region using the deidentified face as a prior. Experimental results demonstrate that the proposed method effectively deidentifies faces while maximizing the preservation of identity-independent information, ensuring the semantic integrity and visual quality of shared images.
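The replace-and-restore idea described in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's actual RVAE networks: the linear encoder/decoder maps, the latent dimensions, and the function names are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed for illustration): an 8-d "face" vector is split
# into a 4-d identity code and a 4-d identity-independent attribute code.
D, D_ID, D_ATTR = 8, 4, 4
W_id = rng.standard_normal((D_ID, D))        # stand-in for the identity encoder
W_attr = rng.standard_normal((D_ATTR, D))    # stand-in for the attribute encoder
W_dec = rng.standard_normal((D, D_ID + D_ATTR))  # stand-in for the decoder

def encode(x):
    """Disentangle x into (identity code, attribute code)."""
    return W_id @ x, W_attr @ x

def decode(z_id, z_attr):
    """Reconstruct a face vector from the two codes."""
    return W_dec @ np.concatenate([z_id, z_attr])

def deidentify(x, surrogate_id):
    """Replace the identity code with a surrogate one; keep the attribute code.

    This mirrors the "replacing" step: identity-related information is
    obfuscated while identity-independent attributes are left untouched.
    """
    _, z_attr = encode(x)
    return decode(surrogate_id, z_attr)

x = rng.standard_normal(D)                   # a toy input face vector
surrogate = rng.standard_normal(D_ID)        # a surrogate identity code
y = deidentify(x, surrogate)                 # deidentified face vector
```

In the actual architecture the maps are learned VAE encoders/decoders and the result is further passed to an inpainting network, but the latent-swap structure is the same: only the identity factors change between input and output.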

Authors

  • Maoguo Gong
  • Jialu Liu
  • Hao Li
  • Yu Xie
  • Zedong Tang
    School of Electronic Engineering, Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, China.