Publication Type

Conference Proceeding Article

Version

publishedVersion

Publication Date

12-2020

Abstract

Outsourced inference services have greatly promoted the popularity of deep learning and helped users customize a range of personalized applications. However, they also entail a variety of security and privacy issues brought by untrusted service providers. In particular, a malicious adversary may violate user privacy during the inference process or, worse, return incorrect results to the client by compromising the integrity of the outsourced model. To address these problems, we propose SecureDL to protect the model's integrity and the user's privacy in the Deep Neural Network (DNN) inference process. In SecureDL, we first transform the complicated non-linear activation functions of DNNs into low-degree polynomials. Then, we give a novel method to generate sensitive-samples, which can verify the integrity of a model's parameters outsourced to the server with high accuracy. Finally, we exploit Leveled Homomorphic Encryption (LHE) to achieve privacy-preserving inference. We show that our sensitive-samples are indeed very sensitive to model changes, such that even a small change in parameters is reflected in the model outputs. Based on experiments conducted on real data and different types of attacks, we demonstrate the superior performance of SecureDL in terms of detection accuracy, inference accuracy, and computation and communication overheads.
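
The abstract's first step rests on the fact that LHE schemes can evaluate only additions and multiplications, so non-linear activations must be replaced by polynomials before encrypted inference. The sketch below is a minimal illustration of that idea, not the authors' exact construction: it fits a degree-3 polynomial to a sigmoid activation by least squares, where the degree, the fitting interval [-8, 8], and the use of NumPy's polyfit are assumptions made purely for illustration.

```python
# Minimal sketch (not SecureDL's exact method): approximate a
# non-linear activation with a low-degree polynomial so that an
# LHE scheme, which supports only additions and multiplications,
# could evaluate it homomorphically. Degree and interval are
# illustrative assumptions.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Sample the activation on the assumed input interval.
xs = np.linspace(-8.0, 8.0, 2000)

# Least-squares fit of a degree-3 polynomial to sigmoid(x).
coeffs = np.polyfit(xs, sigmoid(xs), deg=3)
poly = np.poly1d(coeffs)

# The polynomial uses only + and *, so a ciphertext could evaluate
# it under LHE; here we just check the plaintext approximation error.
max_err = np.max(np.abs(poly(xs) - sigmoid(xs)))
print(f"degree-3 approximation, max error on [-8, 8]: {max_err:.4f}")
```

In practice there is a trade-off: a higher polynomial degree lowers the approximation error but increases the multiplicative depth the LHE scheme must support, which is why low-degree polynomials are preferred for encrypted inference.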

Keywords

Deep learning, Privacy protection, Verifiable inference, Network security

Discipline

Information Security

Research Areas

Cybersecurity

Publication

ACSAC '20: Proceedings of the 36th Annual Computer Security Applications Conference, Virtual, December 7-11, 2020

First Page

784

Last Page

797

ISBN

9781450388580

Identifier

10.1145/3427228.3427232

Publisher

ACM

City or Country

New York

Embargo Period

5-6-2021

Copyright Owner and License

Publisher

Additional URL

https://doi.org/10.1145/3427228.3427232
