A comprehensive survey of deep face verification systems: adversarial attacks and defense strategies.

Journal: Scientific Reports

Abstract

Face Verification (FV) systems have exhibited remarkable performance in verification tasks and have consequently been adopted across a wide range of applications, from identity de-duplication to authentication in mobile payments. However, the surge in popularity of face verification has raised concerns about potential vulnerabilities to adversarial attacks. These concerns stem from the fact that advanced FV systems, which rely on deep neural networks, have recently been shown to be susceptible to crafted input samples known as adversarial examples. Although imperceptible to human observers, adversarial examples can deceive deep neural networks during the testing and deployment phases. These vulnerabilities have raised significant concerns about deploying deep neural networks in safety-critical contexts, prompting extensive investigation into adversarial attacks and corresponding defense strategies. This survey provides a comprehensive overview of recent advances in deep face verification, encompassing a broad spectrum of topics such as algorithmic designs, database utilization, protocols, and application scenarios. Furthermore, we conduct an in-depth examination of state-of-the-art algorithms for generating adversarial examples and the defense mechanisms devised to mitigate such threats.
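To make the notion of an adversarial example concrete, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM), a canonical attack family typically covered in surveys of this kind. The linear "scorer" below is a toy stand-in for a deep FV network, and all names and values are illustrative assumptions, not the method of this paper.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step on a toy logistic scorer: x_adv = x + eps * sign(dL/dx).

    x : input vector (stand-in for a face embedding, assumption)
    w, b : parameters of a linear model (toy stand-in for a deep network)
    y : true label in {0, 1}
    eps : perturbation budget (bounds each coordinate's change)
    """
    z = np.dot(w, x) + b                  # model logit
    p = 1.0 / (1.0 + np.exp(-z))          # sigmoid "match" probability
    grad_x = (p - y) * w                  # gradient of logistic loss w.r.t. x
    return x + eps * np.sign(grad_x)      # step in the sign of the gradient

rng = np.random.default_rng(0)
x = rng.normal(size=8)                    # toy input
w = rng.normal(size=8)
b = 0.0
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.1)

# The perturbation is imperceptibly small in the L-infinity sense:
# each coordinate moves by at most eps.
assert np.max(np.abs(x_adv - x)) <= 0.1 + 1e-12
```

The key property, which carries over to deep networks, is that a perturbation bounded by `eps` per coordinate is chosen specifically to increase the model's loss, so the model's score for the true label drops even though the input barely changes.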

Authors

  • Sohair Kilany
    Computer Science Department, Faculty of Science, Minia University, Al Minya, Egypt. sohair_kilany@mu.edu.eg.
  • Ahmed Mahfouz
    Department of Human Genetics, Leiden University Medical Center, Leiden, The Netherlands. a.mahfouz@lumc.nl.