Publication Type

Conference Proceeding Article

Version

acceptedVersion

Publication Date

4-2018

Abstract

Advances in machine learning (ML) in recent years have enabled a dizzying array of applications such as data analytics, autonomous systems, and security diagnostics. ML is now pervasive: new systems and models are being deployed in every domain imaginable, leading to rapid and widespread deployment of software-based inference and decision making. There is growing recognition that ML exposes new vulnerabilities in software systems, yet the technical community's understanding of the nature and extent of these vulnerabilities remains limited. We systematize recent findings on ML security and privacy, focusing on attacks identified on these systems and defenses crafted to date. We articulate a comprehensive threat model for ML and categorize attacks and defenses within an adversarial framework. Key insights resulting from work in both the ML and security communities are identified, and the effectiveness of approaches is related to structural elements of ML algorithms and the data used to train them. We conclude by formally exploring the opposing relationship between model accuracy and resilience to adversarial manipulation. Through these explorations, we show that there are (possibly unavoidable) tensions between model complexity, accuracy, and resilience that must be calibrated for the environments in which they will be used.
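
For readers unfamiliar with the adversarial manipulation the abstract refers to, the sketch below illustrates one canonical attack this SoK surveys: the fast gradient sign method (FGSM) of Goodfellow et al. This is an illustrative example, not code from the paper; the toy logistic-regression model, its weights, and the perturbation budget eps are all synthetic assumptions chosen only to keep the example self-contained.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy binary classifier: p(y=1 | x) = sigmoid(w @ x + b).
    # Weights are random placeholders, not a trained model.
    w = rng.normal(size=10)
    b = 0.1

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def input_gradient(x, y):
        # Gradient of the binary cross-entropy loss w.r.t. the *input* x
        # (for sigmoid(w @ x + b), this works out to (p - y) * w).
        p = sigmoid(w @ x + b)
        return (p - y) * w

    x = rng.normal(size=10)   # a "clean" input, nominally of class 1
    y = 1.0
    eps = 0.25                # perturbation budget (assumed; set per threat model)

    # FGSM: take one signed gradient step in the direction that increases the loss.
    x_adv = x + eps * np.sign(input_gradient(x, y))

    print("clean     p(y=1):", sigmoid(w @ x + b))
    print("perturbed p(y=1):", sigmoid(w @ x_adv + b))

The single signed-gradient step is what makes this style of attack cheap: one forward and one backward pass suffice to push an input toward the decision boundary, which is one concrete face of the accuracy-versus-resilience tension the abstract describes.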

Discipline

Information Security | Theory and Algorithms

Research Areas

Data Science and Engineering

Publication

Proceedings of the 3rd IEEE European Symposium on Security and Privacy (EuroS&P), London, 2018 April 24-26

First Page

1

Last Page

19

Identifier

10.1109/EuroSP.2018.00035

Publisher

IEEE

City or Country

London, United Kingdom

Additional URL

https://doi.org/10.1109/EuroSP.2018.00035
