Enhanced model poisoning attack and multi-strategy defense in federated learning
Publication Type
Journal Article
Publication Date
3-2025
Abstract
As a new paradigm of distributed learning, Federated Learning (FL) has been applied in industrial fields such as intelligent retail, finance, and autonomous driving. However, several schemes that attack robust aggregation rules and reduce model accuracy have been proposed recently. These schemes do not keep the sign statistics of gradients unchanged during attacks, so the sign statistics-based scheme SignGuard can resist most existing attacks. To defeat SignGuard and most existing cosine- or distance-based aggregation schemes, we propose an enhanced model poisoning attack, ScaleSign. Specifically, ScaleSign uses a scaling attack and a sign modification component to obtain malicious gradients with higher cosine similarity and to modify the sign statistics of malicious gradients, respectively, while these two components have minimal impact on the magnitudes of the gradients. We then propose MSGuard, a Multi-Strategy Byzantine-robust scheme based on cosine mechanisms, sign statistics, and spectral methods. Formal analysis proves that malicious gradients generated by ScaleSign achieve higher cosine similarity than honest gradients. Extensive experiments demonstrate that ScaleSign can attack most existing Byzantine-robust rules, achieving a success rate of up to 98.23% against SignGuard. MSGuard can defend against most existing attacks, including ScaleSign; under the ScaleSign attack, the accuracy of MSGuard improves by up to 41.78% compared to SignGuard.
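To illustrate the idea described above, the following is a minimal, hypothetical sketch of a ScaleSign-style malicious update, not the authors' implementation: it scales an estimated benign mean gradient so the crafted update stays cosine-aligned with benign directions, then flips the signs of only a small, low-magnitude subset of coordinates so the sign statistics and magnitudes change as little as possible. All function and parameter names (scalesign_sketch, scale, flip_ratio) are assumptions for illustration.

```python
# Illustrative sketch only; hypothetical names, not the paper's code.
import numpy as np

def scalesign_sketch(benign_grads, scale=5.0, flip_ratio=0.05, rng=None):
    """Craft a malicious gradient that (i) remains cosine-aligned with the
    estimated benign mean via scaling and (ii) flips only a limited fraction
    of coordinate signs, chosen among the smallest magnitudes, so the sign
    statistics and overall magnitude are barely disturbed."""
    rng = np.random.default_rng() if rng is None else rng
    mean_grad = np.mean(benign_grads, axis=0)   # estimated benign direction
    malicious = scale * mean_grad               # scaling preserves cosine similarity to the mean
    # Sign-modification component: flip a small subset of low-magnitude coordinates.
    n_flip = int(flip_ratio * malicious.size)
    idx = np.argsort(np.abs(malicious))[:n_flip]
    malicious[idx] = -malicious[idx]
    return malicious

# Usage: 10 simulated benign client gradients of dimension 1000.
benign = np.stack([np.random.randn(1000) for _ in range(10)])
poisoned = scalesign_sketch(benign)
```

A sign-statistics defense in the spirit of SignGuard would compare the fraction of positive coordinates in each update against the benign majority; because only a small fraction of signs is altered here, such a filter alone may not flag the crafted update, which is the gap the multi-strategy checks in MSGuard are designed to close.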
Keywords
Federated learning, sign statistics, model poisoning attack
Discipline
Information Security
Research Areas
Information Systems and Management
Publication
IEEE Transactions on Information Forensics and Security
Volume
20
Issue
1
First Page
3877
Last Page
3892
ISSN
1556-6013
Identifier
10.1109/TIFS.2025.3555193
Publisher
Institute of Electrical and Electronics Engineers
Citation
YANG, Li; MIAO, Yinbin; LIU, Ziteng; LIU, Zhiquan; LI, Xinghua; KUANG, Da; LI, Hongwei; and DENG, Robert H.
Enhanced model poisoning attack and multi-strategy defense in federated learning. (2025). IEEE Transactions on Information Forensics and Security. 20, (1), 3877-3892.
Available at: https://ink.library.smu.edu.sg/sis_research/10448
Additional URL
https://doi.org/10.1109/TIFS.2025.3555193