Publication Type

Journal Article

Version

acceptedVersion

Publication Date

9-2023

Abstract

In this paper, we address the problem of privacy-preserving federated neural network training with N users. We present Hercules, an efficient and high-precision training framework that can tolerate collusion of up to N−1 users. Hercules follows the POSEIDON framework proposed by Sav et al. (NDSS’21), but makes a qualitative leap in performance with the following contributions: (i) we design a novel parallel homomorphic computation method for matrix operations, which enables fast Single Instruction Multiple Data (SIMD) operations over ciphertexts. For the multiplication of two h×h matrices, our method reduces the computational complexity from O(h³) to O(h). This greatly improves the training efficiency of the neural network, since the ciphertext computation is dominated by the convolution operations; (ii) we present an efficient approximation of the sign function based on composite polynomial approximation. It is used to approximate non-polynomial functions (i.e., ReLU and max) with optimal asymptotic complexity. Extensive experiments on various benchmark datasets (BCW, ESR, CREDIT, MNIST, SVHN, CIFAR-10 and CIFAR-100) show that, compared with POSEIDON, Hercules achieves up to a 4% increase in model accuracy and up to a 60× reduction in computation and communication cost.
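For intuition on contribution (ii), below is a minimal sketch (in Python/NumPy, on plaintext values) of how a composite polynomial can approximate the sign function, and how ReLU and max then follow using only additions and multiplications, the operations natively supported by homomorphic encryption. The basic polynomial f(x) = (3x − x³)/2 and the composition depth are illustrative assumptions, not the exact construction used in Hercules, and no encryption is involved here.

import numpy as np

def f(x):
    # Odd polynomial mapping [-1, 1] into [-1, 1]; composing it pushes values toward +/-1.
    # This specific polynomial is an illustrative assumption, not the paper's construction.
    return 1.5 * x - 0.5 * x**3

def approx_sign(x, depth=8):
    # Composite approximation: compose 'depth' constant-degree polynomials,
    # so the error shrinks with depth while each step stays low degree.
    for _ in range(depth):
        x = f(x)
    return x

def approx_relu(x):
    # ReLU(x) = x * (1 + sign(x)) / 2, so a sign approximation yields a ReLU approximation.
    return x * (1.0 + approx_sign(x)) / 2.0

def approx_max(a, b):
    # max(a, b) = (a + b)/2 + |a - b|/2, with |t| = t * sign(t).
    d = a - b
    return (a + b) / 2.0 + d * approx_sign(d) / 2.0

xs = np.linspace(-1.0, 1.0, 9)
print(np.round(approx_sign(xs), 3))  # close to -1 or +1 away from the origin
print(np.round(approx_relu(xs), 3))  # close to max(x, 0) on [-1, 1]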

Keywords

federated learning; polynomial approximation; privacy protection

Discipline

Information Security

Research Areas

Cybersecurity

Publication

IEEE Transactions on Dependable and Secure Computing

Volume

20

Issue

5

First Page

4418

Last Page

4433

ISSN

1545-5971

Identifier

10.1109/TDSC.2022.3218793

Publisher

Institute of Electrical and Electronics Engineers

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.1109/TDSC.2022.3218793
