RFed: Robustness-Enhanced Privacy-Preserving Federated Learning Against Poisoning Attack

Publication Type

Journal Article

Publication Date

1-2024

Abstract

Federated learning enables collaborative model training while effectively preserving user privacy. However, as privacy-preserving federated learning is deployed more widely, poisoning attacks threaten model utility. Existing defense schemes suffer from a series of problems, including low accuracy, low robustness, and reliance on strong assumptions, which limit the practicability of federated learning. To address these problems, we propose RFed, a robustness-enhanced privacy-preserving federated learning scheme with scaled dot-product attention under a dual-server model. Specifically, we design a highly robust defense mechanism that replaces the traditional single-server model with a dual-server model, significantly improving model accuracy and completely eliminating the reliance on strong assumptions. Formal security analysis proves that our scheme converges and provides privacy protection, and extensive experiments demonstrate that it reduces computational overhead while guaranteeing privacy preservation and model accuracy, and ensures that the failure rate of poisoning attacks exceeds 96%.
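The abstract names a scaled dot-product attention mechanism as the core of the defense. As a rough illustration only (the paper's actual RFed construction, including its dual-server secure computation, is not reproduced here), the sketch below shows how scaled dot-product attention could weight client updates against a reference update so that outlying, potentially poisoned contributions receive low aggregation weight; the names attention_aggregate and server_reference are hypothetical.

    import numpy as np

    def attention_aggregate(client_updates, server_reference):
        # client_updates: (n_clients, d) flattened local model updates
        # server_reference: (d,) reference update (hypothetical stand-in for
        # whatever trusted signal the servers use in the actual scheme)
        d = client_updates.shape[1]
        # scaled dot-product scores between each client update and the reference
        scores = client_updates @ server_reference / np.sqrt(d)
        # softmax over clients -> aggregation weights; updates that point away
        # from the reference receive very small weight
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        # attention-weighted average of the client updates
        return weights @ client_updates

    # toy usage: four benign updates near the reference, one poisoned update reversed
    rng = np.random.default_rng(0)
    ref = rng.normal(size=10)
    benign = ref + 0.1 * rng.normal(size=(4, 10))
    poisoned = -5.0 * ref
    updates = np.vstack([benign, poisoned[None, :]])
    print(attention_aggregate(updates, ref))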

Keywords

Computational modeling, Federated learning, Poisoning attack, Privacy, Privacy protection, Robustness, Scaled dot-product attention mechanism, Security, Servers, Training

Discipline

Information Security

Research Areas

Cybersecurity

Publication

IEEE Transactions on Information Forensics and Security

First Page

1

ISSN

1556-6013

Identifier

10.1109/TIFS.2024.3402113

Publisher

Institute of Electrical and Electronics Engineers

Additional URL

https://doi.org/10.1109/TIFS.2024.3402113

