Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
12-2023
Abstract
Recent advancements in deep learning have spotlighted a crucial privacy vulnerability: the membership inference attack (MIA), in which adversaries can determine whether specific data was present in a training set, potentially revealing sensitive information. In this paper, we introduce a technique, weighted smoothing (WS), to mitigate MIA risks. Our approach is anchored on the observation that training samples differ in their vulnerability to MIA, primarily based on their distance to clusters of similar samples. The intuition is that clusters make model predictions more confident and thus increase MIA risks. WS therefore strategically introduces noise to training samples, depending on whether they lie near a cluster or are isolated. We evaluate WS against MIAs on multiple benchmark datasets and model architectures, demonstrating its effectiveness. We publish code at https://github.com/BennyTMT/weighted-smoothing.
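The abstract describes WS only at a high level, so the following is a minimal, illustrative sketch rather than the authors' released implementation (see the GitHub link above for that). It assumes one plausible reading of the abstract: each training sample receives Gaussian input noise whose magnitude grows with local cluster density, estimated from k-nearest-neighbour distances in some feature space. The function names (cluster_density, weighted_smoothing), the parameters k and sigma_max, and the use of plain Gaussian noise are all assumptions made for illustration.

```python
# Illustrative sketch only -- NOT the authors' implementation of weighted smoothing.
# Assumption: samples in dense clusters get stronger noise than isolated samples.
import numpy as np


def cluster_density(features: np.ndarray, k: int = 10) -> np.ndarray:
    """Inverse mean distance to the k nearest neighbours (higher = denser)."""
    # Pairwise Euclidean distances; fine for a small illustrative dataset.
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                # ignore self-distance
    knn = np.sort(d, axis=1)[:, :k]            # distances to k nearest neighbours
    return 1.0 / (knn.mean(axis=1) + 1e-8)


def weighted_smoothing(x: np.ndarray, features: np.ndarray,
                       sigma_max: float = 0.1, k: int = 10,
                       rng: np.random.Generator | None = None) -> np.ndarray:
    """Add per-sample Gaussian noise scaled by normalised cluster density."""
    rng = rng if rng is not None else np.random.default_rng(0)
    density = cluster_density(features, k)
    # Normalise densities to [0, 1]; clustered samples get weight near 1.
    weight = (density - density.min()) / (np.ptp(density) + 1e-8)
    noise = rng.standard_normal(x.shape) * (sigma_max * weight)[:, None]
    return x + noise


# Example usage: clustered samples come back noticeably perturbed,
# isolated samples stay almost unchanged.
feats = np.random.default_rng(1).standard_normal((100, 32))
x_smoothed = weighted_smoothing(feats.copy(), feats)
```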
Discipline
Information Security
Research Areas
Cybersecurity
Publication
ACSAC '23: Proceedings of the 39th Annual Computer Security Applications Conference, Austin, December 4-8
First Page
787
Last Page
798
ISBN
9798400708862
Identifier
10.1145/3627106.3627189
Publisher
ACM
City or Country
New York
Citation
TAN, Minghan; XIE, Xiaofei; SUN, Jun; and WANG, Tianhao.
Mitigating membership inference attacks via weighted smoothing. (2023). ACSAC '23: Proceedings of the 39th Annual Computer Security Applications Conference, Austin, December 4-8. 787-798.
Available at: https://ink.library.smu.edu.sg/sis_research/8613
Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 License.
Additional URL
https://doi.org/10.1145/3627106.3627189