Publication Type

Conference Proceeding Article

Version

Accepted version

Publication Date

3-2020

Abstract

Deep neural networks (DNNs) are increasingly applied in safety-critical systems, e.g., for face recognition, autonomous car control and malware detection. It has also been shown that DNNs are subject to attacks such as adversarial perturbation and thus must be properly tested. Many coverage criteria for DNNs have since been proposed, inspired by the success of code coverage criteria for software programs. The expectation is that if a DNN is well tested (and retrained) according to such coverage criteria, it is more likely to be robust. In this work, we conduct an empirical study to evaluate the relationship between coverage, robustness and attack/defense metrics for DNNs. Our study is the largest to date and is systematically conducted on 100 DNN models and 25 metrics. One of our findings is that there is limited correlation between coverage and robustness, i.e., improving coverage does not help improve robustness. Our dataset and implementation have been made available to serve as a benchmark for future studies on testing DNNs.
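
For illustration only (not drawn from the paper), the following minimal Python sketch shows the kind of analysis the abstract describes: measuring how strongly a coverage metric correlates with a robustness metric across a set of models. The per-model scores below are hypothetical placeholders, and the choice of Pearson and Kendall statistics is an assumption; the study's actual models, metrics and statistical tests are documented in its released dataset and implementation.

# Hedged sketch: correlation between a coverage metric and a robustness metric.
# All numbers are hypothetical; the study itself covers 100 DNN models and 25 metrics.
from scipy.stats import pearsonr, kendalltau

coverage   = [0.62, 0.71, 0.55, 0.80, 0.68]   # e.g., a neuron-coverage score per model
robustness = [0.41, 0.38, 0.45, 0.36, 0.43]   # e.g., an empirical robustness score per model

r, p_r = pearsonr(coverage, robustness)        # linear correlation
tau, p_tau = kendalltau(coverage, robustness)  # rank correlation

print(f"Pearson r = {r:.3f} (p = {p_r:.3f})")
print(f"Kendall tau = {tau:.3f} (p = {p_tau:.3f})")

A weak or statistically insignificant correlation across many models would support the paper's finding that improving coverage does not, by itself, improve robustness.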

Keywords

Complex networks, Deep neural networks, Face recognition, Malware, Safety engineering, Statistical tests

Discipline

Software Engineering

Research Areas

Software and Cyber-Physical Systems

Publication

2020 25th IEEE International Conference on Engineering of Complex Computer Systems (ICECCS): Singapore, March 4-6: Proceedings

First Page

73

Last Page

82

ISBN

9781728185583

Identifier

10.1109/ICECCS51672.2020.00016

Publisher

IEEE

City or Country

Piscataway, NJ

Embargo Period

5-17-2021

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.1109/ICECCS51672.2020.00016
