A defense method against multi-label poisoning attacks in federated learning.

Journal: Scientific Reports

Abstract

Federated learning is a distributed machine learning framework that allows multiple parties to collaboratively train models without sharing raw data. While it enhances data privacy, it is vulnerable to malicious attacks, especially data poisoning attacks such as label flipping. Traditional defense mechanisms perform poorly against these complex and diverse attacks, particularly multi-label flipping attacks. In this paper, we propose a defense method against multi-label flipping attacks. The proposed method extracts gradients from the neurons in the output layer and applies clustering analysis, using a combination of metrics, to distinguish between benign and malicious participants. It effectively identifies and filters out malicious updates, demonstrating strong robustness against multi-label flipping attacks. Experimental results show that the method outperforms existing defenses in both accuracy and robustness across multiple datasets, including MNIST, FashionMNIST, NSL-KDD, and CICIDS-2017, especially under a high proportion of attackers and varied attack scenarios.
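The abstract describes the mechanism only at a high level. As a rough illustrative sketch (not the authors' implementation), the Python snippet below clusters each client's flattened output-layer gradient with scikit-learn's KMeans and keeps the majority cluster. The function name filter_updates, the input format, and the majority-cluster rule are assumptions for illustration; the paper reportedly combines multiple metrics rather than relying on cluster size alone, which this sketch does not reproduce.

    # Minimal sketch of the general idea, assuming the server can collect
    # one flattened output-layer gradient per client. Not the paper's method.
    import numpy as np
    from sklearn.cluster import KMeans

    def filter_updates(output_layer_grads):
        """output_layer_grads: list of 1-D numpy arrays, one flattened
        output-layer gradient per client (hypothetical input format).
        Returns indices of clients judged benign."""
        X = np.stack(output_layer_grads)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        # Assumption: benign clients form the larger cluster. This only
        # holds when attackers are a minority; the paper's combination of
        # metrics is meant to handle higher attacker proportions.
        benign_label = np.argmax(np.bincount(labels))
        return [i for i, lbl in enumerate(labels) if lbl == benign_label]

    # Usage example: 8 benign clients with similar gradients, 2 outliers.
    rng = np.random.default_rng(0)
    benign = [rng.normal(0.0, 0.1, size=100) for _ in range(8)]
    malicious = [rng.normal(3.0, 0.1, size=100) for _ in range(2)]
    print(filter_updates(benign + malicious))  # expected: [0, 1, ..., 7]

Clustering only the output-layer gradients, rather than the full update vector, keeps the feature dimension small and targets the layer where label-flipping attacks leave the most direct signature.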

Authors

  • Wei Ma
    Institute of Urban Agriculture, Chinese Academy of Agricultural Sciences, Chengdu, China.
  • Qihang Zhao
    School of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou, 450045, China.
  • Wenjun Tian
    School of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou, 450045, China.
