"Robust learning with probabilistic relaxation using hypothesis-test-ba" by Zilin WANG

Publication Type

Master Thesis

Version

publishedVersion

Publication Date

12-2024

Abstract

In recent years, deep learning has become a vital tool for a wide range of tasks. The performance of a neural network is usually evaluated through empirical risk minimization. However, robustness issues have drawn great concern, as they can be fatal in safety-critical applications. Adversarial training can mitigate the issue by minimizing the loss under worst-case perturbations of the data. It is effective in improving the robustness of a model, but it is overly conservative, and the clean performance of the model can be unsatisfactory. Probabilistic Robust Learning (PRL) empirically balances average- and worst-case performance, yet in most existing work the robustness of the resulting model is not provable. This thesis proposes a novel approach to robust learning that samples perturbations based on hypothesis testing. The approach guides training to improve robustness in a highly efficient probabilistic robustness setting, and it also enforces the robustness to be provably certified.
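For orientation, the three objectives contrasted above can be written as follows; the notation is a standard formulation assumed here for illustration and is not quoted from the thesis. Empirical risk minimization averages the loss over the data distribution, adversarial training minimizes the loss under the worst-case perturbation within a budget \epsilon, and probabilistic robust learning relaxes the worst case to a requirement that perturbations change the prediction only with small probability \rho:

\[
\min_{\theta}\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\ell(f_{\theta}(x),y)\big]
\qquad \text{(empirical risk minimization)}
\]
\[
\min_{\theta}\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\max_{\|\delta\|\le\epsilon}\ell(f_{\theta}(x+\delta),y)\Big]
\qquad \text{(adversarial training)}
\]
\[
\min_{\theta}\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\ell(f_{\theta}(x),y)\big]
\quad \text{s.t.} \quad
\Pr_{\delta:\,\|\delta\|\le\epsilon}\big[f_{\theta}(x+\delta)\ne y\big]\le\rho
\qquad \text{(probabilistic robustness)}
\]

In practice the probabilistic constraint is estimated from sampled perturbations, which is the point at which a hypothesis test on the estimated violation probability can be applied.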

We evaluate our new framework by generating adversarial samples from several popular datasets and comparing its performance with other state-of-the-art works. The results show that our approach performs comparably on simple classification tasks and outperforms the state-of-the-art works on more difficult tasks.

Keywords

AI Security, AI Robustness, Deep Learning

Degree Awarded

MSc in Applied Finance (SUFE)

Discipline

Artificial Intelligence and Robotics

Supervisor(s)

SUN, Jun

First Page

1

Last Page

33

Publisher

Singapore Management University

City or Country

Singapore

Copyright Owner and License

Author
