Publication Type
Master Thesis
Version
publishedVersion
Publication Date
7-2022
Abstract
Neural networks have become essential components of many safety-critical systems, such as self-driving cars and medical diagnosis. It is therefore desirable that neural networks not only achieve high accuracy (which can traditionally be validated using a test set) but also satisfy safety properties such as robustness, fairness, or freedom from backdoors. Many approaches to verifying neural networks against such safety properties have been developed based on classical abstract interpretation. However, as in program verification, these approaches suffer from false alarms, which may hinder the deployment of the networks.
One natural remedy, adopted from the program verification community, is counterexample-guided abstraction refinement (CEGAR). Applying CEGAR to neural network verification is, however, highly non-trivial due to complications arising from both the neural networks and the abstractions. In this thesis, we propose a method to enhance abstract interpretation for verifying neural networks through a two-step application of CEGAR. First, we employ an optimization-based procedure to validate the abstraction at each propagation step and identify problematic abstractions via counterexample search. Then, we leverage a causality-based approach to select the most likely problematic components of the abstraction and refine them accordingly.
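The CEGAR loop described above can be illustrated with a deliberately simplified sketch. The code below is hypothetical and is not the thesis's DeepCause implementation: it verifies an output bound for a tiny two-neuron network f(x) = relu(x) + relu(-x) using interval abstraction, searches for concrete counterexamples when the abstraction fails to prove the property, and refines by splitting the input interval (a stand-in for the thesis's optimization-based validation and causality-guided refinement).

```python
def f(x):
    """Concrete network: f(x) = relu(x) + relu(-x), i.e. |x|."""
    return max(0.0, x) + max(0.0, -x)

def abstract_bounds(lo, hi):
    """Interval propagation through f; imprecise because the two
    ReLU branches share the same input but are abstracted independently."""
    r1 = (max(0.0, lo), max(0.0, hi))    # relu(x) over [lo, hi]
    r2 = (max(0.0, -hi), max(0.0, -lo))  # relu(-x): -x ranges over [-hi, -lo]
    return r1[0] + r2[0], r1[1] + r2[1]

def verify_cegar(lo, hi, bound, max_splits=20):
    """CEGAR-style loop: return True if f(x) <= bound for all x in [lo, hi]."""
    worklist = [(lo, hi)]
    splits = 0
    while worklist:
        l, h = worklist.pop()
        _, upper = abstract_bounds(l, h)
        if upper <= bound:
            continue  # abstraction proves the property on this piece
        # Counterexample search (here: just sample the endpoints).
        if f(l) > bound or f(h) > bound:
            return False  # genuine violation found
        # Spurious alarm: refine the abstraction by splitting the input.
        if splits >= max_splits:
            return False  # refinement budget exhausted (sound but incomplete)
        mid = (l + h) / 2.0
        worklist += [(l, mid), (mid, h)]
        splits += 1
    return True
```

On the input range [-1, 1] the abstract upper bound is 2 even though f never exceeds 1, so proving f(x) <= 1.5 requires one refinement step, after which each half-interval is verified exactly.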
To evaluate our approach, we have implemented a prototype named DeepCause on top of DeepPoly, taking local robustness as the target safety property. The evaluation shows that our proposal outperforms DeepPoly and RefinePoly on all benchmark networks. We note that our idea is not limited to a specific abstract domain, and we believe it is a promising step towards enhancing the verification of complex neural network systems.
Keywords
neural network, verification, abstraction refinement, CEGAR
Degree Awarded
MSc in Applied Finance (SUFE)
Discipline
OS and Networks | Software Engineering
Supervisor(s)
SUN, Jun
First Page
1
Last Page
44
Publisher
Singapore Management University
City or Country
Singapore
Citation
NGUYEN HUA GIA PHUC.
Deepcause: Verifying neural networks with abstraction refinement. (2022). 1-44.
Available at: https://ink.library.smu.edu.sg/etd_coll/447
Copyright Owner and License
Author
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.