Publication Type

Journal Article

Version

publishedVersion

Publication Date

2-2021

Abstract

Deep neural networks (DNNs) are vulnerable to adversarial examples, which are generated by adding imperceptible perturbations to inputs. Understanding the adversarial robustness of DNNs has become an important issue, and would certainly lead to better practical deep learning applications. To address this issue, we explain adversarial robustness for deep models from a new perspective, the critical attacking route, which is computed by a gradient-based influence propagation strategy. Similar to rumor spreading in social networks, we believe that adversarial noise is amplified and propagated through the critical attacking route. By exploiting neurons' influences layer by layer, we compose the critical attacking route from the neurons that make the highest contributions to the model decision. In this paper, we first draw the close connection between adversarial robustness and the critical attacking route, as the route makes the most non-trivial contributions to model predictions in the adversarial setting. By constraining the propagation process and node behaviors on this route, we can weaken the noise propagation and improve model robustness. We also find that critical attacking neurons are useful for evaluating sample adversarial hardness: images with higher stimulus are more easily perturbed into adversarial examples.
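
The gradient-based, layer-by-layer influence scoring the abstract describes can be sketched in code. The snippet below is a minimal, hypothetical PyTorch illustration and not the authors' implementation: it assumes neuron influence is scored as |activation × gradient| with respect to the predicted logit, and that the route is formed from the top-k scoring neurons of each linear layer; the name critical_route and the parameter k are placeholders.

```python
import torch
import torch.nn as nn

def critical_route(model: nn.Module, x: torch.Tensor, k: int = 5):
    """For each Linear layer, return indices of the k output neurons with the
    largest |activation * gradient| contribution to the predicted logit.
    (Assumed scoring rule; the paper's exact strategy may differ.)"""
    activations, hooks = {}, []

    def save(name):
        def hook(_module, _inputs, output):
            output.retain_grad()          # keep gradients on intermediate tensors
            activations[name] = output
        return hook

    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            hooks.append(module.register_forward_hook(save(name)))

    logits = model(x)
    logits[0].max().backward()            # backpropagate from the top logit

    route = {}
    for name, act in activations.items():
        influence = (act * act.grad).abs().squeeze(0)   # per-neuron influence score
        route[name] = influence.topk(min(k, influence.numel())).indices.tolist()

    for h in hooks:
        h.remove()
    return route

# Toy usage on a small MLP
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
print(critical_route(model, torch.randn(1, 8)))
```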

Keywords

Critical attacking route, Adversarial robustness, Model interpretation

Discipline

Information Security | Software Engineering

Research Areas

Software and Cyber-Physical Systems

Publication

Information Sciences

Volume

547

First Page

568

Last Page

578

ISSN

0020-0255

Identifier

10.1016/j.ins.2020.08.043

Publisher

Elsevier

Additional URL

https://doi.org/10.1016/j.ins.2020.08.043
