Publication Type
Journal Article
Version
acceptedVersion
Publication Date
11-2022
Abstract
Neural networks are increasingly popular thanks to their exceptional performance on many real-world problems. At the same time, they have been shown to be vulnerable to attacks, difficult to debug, and subject to fairness issues. To improve people’s trust in the technology, it is often necessary to provide some human-understandable explanation of a neural network’s decisions, e.g., why is it that my loan application is rejected whereas hers is approved? That is, a stakeholder would like to minimize the chance of being unable to explain a decision consistently, and would like to know, before a neural network is deployed, how often and how easily its decisions can be explained. In this work, we propose two measurements of the decision explainability of neural networks. We then develop algorithms for automatically evaluating these measurements on user-provided neural networks. We evaluate our approach on multiple neural network models trained on benchmark datasets. The results show that existing neural networks’ decisions often have low explainability according to our measurements. This is in line with the observation that adversarial samples, which are often hard to explain, can easily be generated through adversarial perturbation. Our further experiments show that the decisions of models trained with robust training are not necessarily easier to explain, whereas the decisions of models retrained with samples generated by our algorithms are.
Keywords
Deep learning models, Model interpretability, Neural network testing
Discipline
OS and Networks | Software Engineering
Research Areas
Software and Cyber-Physical Systems
Publication
Automated Software Engineering
Volume
29
Issue
2
First Page
1
Last Page
26
ISSN
0928-8910
Identifier
10.1007/s10515-022-00338-w
Publisher
Springer
Citation
ZHANG, Mengdi; SUN, Jun; and WANG, Jingyi.
Which neural network makes more explainable decisions? An approach towards measuring explainability. (2022). Automated Software Engineering. 29, (2), 1-26.
Available at: https://ink.library.smu.edu.sg/sis_research/7160
Copyright Owner and License
Authors
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Additional URL
https://doi.org/10.1007/s10515-022-00338-w