Publication Type

Journal Article

Version

acceptedVersion

Publication Date

3-2021

Abstract

Federated deep learning has been widely used in various fields. To protect data privacy, many privacy-preserving approaches have been designed and implemented for various scenarios. However, existing works rarely consider a fundamental issue: the data shared by certain users (called irregular users) may be of low quality. In a federated training process, data shared by many irregular users may impair the training accuracy or, worse, render the final model useless. In this paper, we propose PPFDL, a Privacy-Preserving Federated Deep Learning framework with irregular users. Specifically, we design a novel solution to reduce the negative impact of irregular users on the training accuracy, which guarantees that the training results are computed mainly from the contributions of high-quality data. Meanwhile, we exploit Yao's garbled circuits and additively homomorphic cryptosystems to ensure the confidentiality of all user-related information. Moreover, PPFDL is robust to users dropping out during the whole execution: any user can go offline at any subprocess of training, as long as the remaining online users can still complete the training task. Extensive experiments demonstrate the superior performance of PPFDL in terms of training accuracy, computation, and communication overhead.
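
The abstract's core idea, weighting each user's contribution by a data-quality score and aggregating the updates under an additively homomorphic cryptosystem so the aggregator never sees plaintext gradients, can be illustrated with a minimal sketch. The code below is not the paper's actual protocol (PPFDL derives the quality weights securely, using Yao's garbled circuits, and supports dropouts); it only shows, assuming the python-paillier (`phe`) library, how quality-weighted gradient updates can be summed as ciphertexts. All user names, gradients, and weights are hypothetical illustrative values.

```python
# A minimal sketch (NOT the paper's actual protocol) of quality-weighted
# gradient aggregation over an additively homomorphic cryptosystem, using
# the python-paillier library (pip install phe).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Hypothetical per-user gradients and data-quality weights (illustrative only).
# In PPFDL these weights would be computed securely, not known in the clear.
user_gradients = {"u1": [0.12, -0.40], "u2": [0.90, 0.35], "u3": [0.10, -0.38]}
quality_weights = {"u1": 0.9, "u2": 0.1, "u3": 0.8}  # low weight = irregular user

# Each user encrypts its weighted gradient before uploading, so the server
# only ever handles ciphertexts.
encrypted_updates = [
    [public_key.encrypt(g * quality_weights[u]) for g in grad]
    for u, grad in user_gradients.items()
]

# Server-side aggregation: adding Paillier ciphertexts adds the plaintexts.
aggregate = encrypted_updates[0]
for update in encrypted_updates[1:]:
    aggregate = [a + b for a, b in zip(aggregate, update)]

# Decryption (done locally here for illustration) yields the weighted-average
# gradient, dominated by the contributions of high-quality users.
total_weight = sum(quality_weights.values())
print([private_key.decrypt(c) / total_weight for c in aggregate])
```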

Keywords

Collaborative learning, deep learning, privacy

Discipline

Information Security | Numerical Analysis and Scientific Computing

Research Areas

Cybersecurity

Publication

IEEE Transactions on Dependable and Secure Computing

Volume

19

Issue

2

First Page

1364

Last Page

1381

ISSN

1545-5971

Identifier

10.1109/TDSC.2020.3005909

Publisher

IEEE

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.1109/TDSC.2020.3005909
