ShieldFL: Mitigating model poisoning attacks in privacy-preserving federated learning
Publication Type
Journal Article
Publication Date
4-2022
Abstract
Privacy-Preserving Federated Learning (PPFL) is an emerging secure distributed learning paradigm that aggregates user-trained local gradients into a federated model through a cryptographic protocol. Unfortunately, PPFL is vulnerable to model poisoning attacks launched by a Byzantine adversary, who crafts malicious local gradients to harm the accuracy of the federated model. To resist model poisoning attacks, existing defense strategies focus on identifying suspicious local gradients over plaintexts. However, a Byzantine adversary can submit encrypted poisonous gradients to circumvent these defenses in PPFL, resulting in encrypted model poisoning. To address this issue, we design a privacy-preserving defense strategy based on two-trapdoor homomorphic encryption (referred to as ShieldFL), which resists encrypted model poisoning without compromising privacy in PPFL. Specifically, we first present a secure cosine similarity method for measuring the distance between two encrypted gradients. We then propose a Byzantine-tolerant aggregation scheme based on cosine similarity, which achieves robustness for both Independent and Identically Distributed (IID) and non-IID data. Extensive evaluations on three benchmark datasets (i.e., MNIST, KDDCup99, and Amazon) show that ShieldFL outperforms existing defense strategies. In particular, ShieldFL achieves a 30%-80% accuracy improvement when defending against two state-of-the-art model poisoning attacks in both IID and non-IID settings.
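To make the aggregation idea in the abstract concrete, the sketch below shows a plaintext analogue: each client's gradient is weighted by its cosine similarity to a reference direction, so gradients pointing away from it contribute little or nothing to the federated update. This is a minimal, illustrative sketch only; the paper's actual protocol evaluates these similarities over ciphertexts using two-trapdoor homomorphic encryption, and the function name `robust_aggregate`, the choice of baseline gradient, and the clip-at-zero weighting rule here are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-12):
    """Cosine similarity between two gradient vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def robust_aggregate(local_grads, baseline):
    """Weight each local gradient by its cosine similarity to a trusted
    baseline direction, suppress dissimilar (likely poisoned) gradients,
    and return the weighted average. Plaintext sketch only; ShieldFL
    performs the analogous computation over encrypted gradients.

    local_grads : list of 1-D numpy arrays, one per client
    baseline    : 1-D numpy array used as the reference direction
    """
    sims = np.array([cosine_similarity(g, baseline) for g in local_grads])
    # Clip negative similarities to zero so gradients opposing the
    # baseline direction contribute nothing to the federated update.
    weights = np.clip(sims, 0.0, None)
    if weights.sum() == 0:
        # Every gradient was deemed poisonous; fall back to the baseline.
        return baseline.copy()
    weights /= weights.sum()
    return np.sum([w * g for w, g in zip(weights, local_grads)], axis=0)

# Toy usage: 4 honest clients plus 1 Byzantine client that scales and
# flips the sign of the true gradient.
rng = np.random.default_rng(0)
true_grad = rng.normal(size=10)
honest = [true_grad + 0.1 * rng.normal(size=10) for _ in range(4)]
poisoned = [-5.0 * true_grad]
agg = robust_aggregate(honest + poisoned, baseline=true_grad)
print("cosine(agg, true):", cosine_similarity(agg, true_grad))
```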
Keywords
Cryptography, Data models, Privacy, Computational modeling, Servers, Data privacy, Homomorphic encryption, Privacy-preserving, Defense strategy, Model poisoning attack, Federated learning
Discipline
Databases and Information Systems | Information Security
Research Areas
Cybersecurity
Publication
IEEE Transactions on Information Forensics and Security
Volume
17
First Page
1639
Last Page
1654
ISSN
1556-6013
Identifier
10.1109/TIFS.2022.3169918
Publisher
Institute of Electrical and Electronics Engineers
Citation
MA, Zhuoran; MA, Jianfeng; MIAO, Yinbin; LI, Yingjiu; and DENG, Robert H.
ShieldFL: Mitigating model poisoning attacks in privacy-preserving federated learning. (2022). IEEE Transactions on Information Forensics and Security. 17, 1639-1654.
Available at: https://ink.library.smu.edu.sg/sis_research/7252
Additional URL
https://doi.org/10.1109/TIFS.2022.3169918