Publication Type

Conference Proceeding Article

Version

acceptedVersion

Publication Date

5-2024

Abstract

Adversarial examples are manipulated samples used to deceive machine learning models, posing a serious threat in safety-critical applications. Existing safety certificates for machine learning models are limited to individual input examples, failing to capture generalization to unseen data. To address this limitation, we propose novel generalization bounds based on the PAC-Bayesian and randomized smoothing frameworks, providing certificates that predict the model’s performance and robustness on unseen test samples based solely on the training data. We present an effective procedure to train and compute the first non-vacuous generalization bounds for neural networks in adversarial settings. Experimental results on the widely recognized MNIST and CIFAR-10 datasets demonstrate the efficacy of our approach, yielding the first robust risk certificates for stochastic convolutional neural networks under the $L_2$ threat model. Our method offers valuable tools for evaluating model susceptibility to real-world adversarial risks.
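As a minimal illustration of the two frameworks the abstract names (not the paper's own certificates, whose exact form is not reproduced here), the standard PAC-Bayes-kl generalization bound and the randomized-smoothing $L_2$ certified radius take the forms

\[
\mathrm{kl}\!\left(\hat{L}_S(Q)\,\middle\|\,L_{\mathcal{D}}(Q)\right) \;\le\; \frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{n}}{\delta}}{n},
\qquad
R \;=\; \frac{\sigma}{2}\left(\Phi^{-1}(p_A) - \Phi^{-1}(p_B)\right),
\]

where $Q$ is a posterior distribution over network weights, $P$ a data-independent prior, $\hat{L}_S(Q)$ the empirical risk on $n$ training samples, $L_{\mathcal{D}}(Q)$ the population risk, and $\delta$ the confidence parameter; in the smoothing term, $\sigma$ is the Gaussian noise level, $p_A$ and $p_B$ are (bounds on) the probabilities of the top two classes under noise, and $\Phi^{-1}$ is the standard normal inverse CDF. The robust risk certificates described in the abstract combine ingredients of this kind to bound performance under the $L_2$ threat model from training data alone.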

Keywords

Bayesian, Generalisation, Generalization bound, Machine learning models, Neural-networks, Performance, Safety critical applications, Stochastic neural network, Test samples, Training data

Discipline

Databases and Information Systems | Data Storage Systems

Research Areas

Data Science and Engineering; Information Systems and Management

Publication

Proceedings of the 27th International Conference on Artificial Intelligence and Statistics (AISTATS 2024), Valencia, Spain, May 2-4, 2024

Volume

238

First Page

4528

Last Page

4536

Identifier

https://proceedings.mlr.press/v238/mustafa24a.html

Publisher

ML Research Press

City or Country

New York
