A Causality-Aligned Structure Rationalization Scheme Against Adversarial Biased Perturbations for Graph Neural Networks
Publication Type
Journal Article
Publication Date
September 2023
Abstract
Graph neural networks (GNNs) are susceptible to adversarial perturbations and distribution biases, which pose potential security concerns for real-world applications. Current endeavors mainly focus on graph matching, while the subtle relationships between the nodes and structures of graph-structured data remain under-explored. Accordingly, two fundamental challenges arise: 1) the intricate connections among nodes may induce distribution shifts of graph samples even within the same scenario, and 2) perturbations of inherent graph-structured representations can introduce spurious shortcuts, leading GNN models to rely on biased data and make unstable predictions. To address these problems, we propose a novel causality-aligned structure rationalization (CASR) scheme that constructs invariant rationales by probing coherent and causal patterns, enabling GNN models to make stable and reliable predictions under adversarial biased perturbations. Specifically, initial graph samples across domains are leveraged to boost the diversity of datasets and perceive the interaction between shortcuts. Subsequently, causal invariant rationales are obtained through interventions, which allows the GNN model to extrapolate risk variations from a single observed environment to multiple unknown environments. Moreover, a query feedback mechanism progressively promotes consistency-driven optimal rationalization by reinforcing real essences and eliminating spurious shortcuts. Extensive experiments demonstrate the effectiveness of our scheme against adversarial biased perturbations from data-manipulation attacks and out-of-distribution (OOD) shifts on various graph-structured datasets. Notably, we reveal that capturing distinctive rationales can greatly reduce the dependence on shortcut cues and improve the robustness of OOD generalization.
Keywords
Perturbation methods, Predictive models, Reliability, Robustness, Graph neural networks, Data models, Correlation, Adversarial biased perturbations, spurious correlations, invariant causal rationales, OOD generalization
Discipline
Information Security
Research Areas
Cybersecurity
Publication
IEEE Transactions on Information Forensics and Security
Volume
19
First Page
59
Last Page
73
ISSN
1556-6013
Identifier
10.1109/TIFS.2023.3318936
Publisher
Institute of Electrical and Electronics Engineers
Citation
JIA, Ju; MA, Siqi; LIU, Yang; WANG, Lina; and DENG, Robert H.
A Causality-Aligned Structure Rationalization Scheme Against Adversarial Biased Perturbations for Graph Neural Networks. (2023). IEEE Transactions on Information Forensics and Security. 19, 59-73.
Available at: https://ink.library.smu.edu.sg/sis_research/8501
Copyright Owner and License
Authors
Additional URL
https://doi.org/10.1109/TIFS.2023.3318936