Robust Transparency Against Model Inversion Attacks

Journal: IEEE Transactions on Dependable and Secure Computing
Published Date:

Abstract

Transparency has become a critical need in machine learning (ML) applications. Designing transparent ML models helps increase trust, ensure accountability, and enable scrutiny of fairness. Some organizations may opt out of transparency to protect individuals' privacy, so there is great demand for transparency models that account for both privacy and security risks. Such transparency models can motivate organizations to improve their credibility by making the ML-based decision-making process comprehensible to end users. Differential privacy (DP) is an important technique for disclosing information while protecting individual privacy. However, it has been shown that DP alone cannot prevent certain types of privacy attacks against disclosed ML models. DP with low ε values can provide strong privacy guarantees but may yield significantly less accurate ML models, while setting the ε value too high may leave the model open to successful privacy attacks. This raises the question of whether we can disclose accurate, transparent ML models while preserving privacy. In this paper, we introduce a novel technique that complements DP to ensure model transparency and accuracy while remaining robust against model inversion attacks. We show that combining the proposed technique with DP yields highly transparent and accurate ML models that preserve privacy against model inversion attacks.
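The privacy–utility trade-off the abstract describes, where a lower privacy budget ε means stronger privacy but noisier outputs, can be illustrated with the standard Laplace mechanism. The sketch below is not the paper's technique; the released statistic, its sensitivity, and the ε values are illustrative assumptions chosen to show how accuracy degrades as ε shrinks.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-DP by adding Laplace noise.

    The noise scale is sensitivity / epsilon, so a smaller epsilon
    (stronger privacy) produces a noisier, less accurate release.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical example: releasing the mean of 1000 records bounded in [0, 1].
true_mean = 0.42              # illustrative statistic over private data
sensitivity = 1.0 / 1000      # changing one record shifts the mean by at most this

for eps in (0.01, 0.1, 1.0, 10.0):
    noisy = np.array([laplace_mechanism(true_mean, sensitivity, eps)
                      for _ in range(1000)])
    print(f"epsilon={eps:5.2f}  mean abs error={np.mean(np.abs(noisy - true_mean)):.5f}")
```

Running this shows the error growing roughly as 1/ε, which is the tension the paper addresses: ε small enough to resist attacks such as model inversion can noticeably hurt model utility.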

Authors

  • Yasmeen Alufaisan
    EXPEC Computer Operations Department, Saudi Aramco, Dhahran 31311, Saudi Arabia.
  • Murat Kantarcioglu
    Department of Computer Science, University of Texas at Dallas, Richardson, Texas 75080, United States.
  • Yan Zhou
    Department of Computer Science, University of Texas at Dallas, Richardson, Texas 75080, United States.
