Noise-resistant sharpness-aware minimization in deep learning.

Journal: Neural networks : the official journal of the International Neural Network Society
PMID:

Abstract

Sharpness-aware minimization (SAM) aims to enhance model generalization by minimizing the sharpness of the loss landscape, leading to more robust model performance. To protect sensitive information and enhance privacy, prevailing approaches add noise to models. However, such additive noise inevitably degrades the generalization and robustness of the model. In this paper, we propose a noise-resistant SAM method based on a noise-resistant parameter update rule. We analyze the convergence and noise-resistance properties of the proposed method under noisy conditions, and we present experimental results with several networks on various benchmark datasets to demonstrate the advantages of the proposed method with respect to model generalization and privacy protection.
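For orientation, the sketch below illustrates the standard SAM update that the abstract builds on: perturb the weights along the normalized gradient by a radius rho (the sharpness-seeking ascent step), then apply the base optimizer using the gradient evaluated at the perturbed weights. This is a minimal PyTorch sketch of vanilla SAM only, not the paper's noise-resistant update rule; the function and parameter names (`sam_step`, `rho`) are illustrative assumptions.

```python
import torch


def sam_step(model, loss_fn, inputs, targets, base_optimizer, rho=0.05):
    """One vanilla SAM update (sketch): ascend to a worst-case nearby point, then descend."""
    # First forward/backward pass: gradient at the current weights w.
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # Compute the perturbation eps = rho * g / ||g|| and move to w + eps.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
    scale = rho / (grad_norm + 1e-12)
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = p.grad * scale
            p.add_(e)          # perturbed weights w + eps
            eps.append(e)

    # Second forward/backward pass: gradient at the perturbed weights.
    model.zero_grad()
    loss_fn(model(inputs), targets).backward()

    # Restore the original weights, then step the base optimizer with the
    # gradient computed at w + eps.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    base_optimizer.step()
    model.zero_grad()
    return loss.item()
```

The paper's contribution can be read against this baseline: when noise is injected into the model for privacy, the two gradient evaluations above become noisy, and the proposed noise-resistant parameter update rule is designed so that the SAM-style update retains its convergence and generalization benefits under that noise.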

Authors

  • Dan Su
    Tencent AI Lab, China. Electronic address: dansu@tencent.com.
  • Long Jin
  • Jun Wang
    Department of Speech, Language, and Hearing Sciences and the Department of Neurology, The University of Texas at Austin, Austin, TX 78712, USA.