Publication Type

Journal Article

Version

acceptedVersion

Publication Date

6-2023

Abstract

Graphs can model complicated interactions between entities, which naturally emerge in many important applications. These applications can often be cast into standard graph learning tasks, in which a crucial step is to learn low-dimensional graph representations. Graph neural networks (GNNs) are currently the most popular model in graph embedding approaches. However, standard GNNs in the neighborhood aggregation paradigm suffer from limited discriminative power in distinguishing high-order graph structures as opposed to low-order structures. To capture high-order structures, researchers have resorted to motifs and developed motif-based GNNs. However, existing motif-based GNNs still often exhibit limited discriminative power on high-order structures. To overcome the above limitations, we propose motif GNN (MGNN), a novel framework to better capture high-order structures, hinging on our proposed motif redundancy minimization operator and injective motif combination. First, MGNN produces a set of node representations with respect to each motif. Next, our proposed redundancy minimization among motifs compares the motifs with one another and distills the features unique to each motif. Finally, MGNN updates the node representations by combining the multiple representations from different motifs. In particular, to enhance the discriminative power, MGNN uses an injective function to combine the representations with respect to different motifs. Through a theoretical analysis, we further show that the proposed architecture increases the expressive power of GNNs. We demonstrate that MGNN outperforms state-of-the-art methods on seven public benchmarks on both the node classification and graph classification tasks.
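As a rough illustration of the three stages named in the abstract (per-motif node representations, redundancy minimization among motifs, and injective combination), the sketch below shows one plausible layer in PyTorch. The class name, the use of motif-based adjacency matrices, the mean-subtraction stand-in for redundancy minimization, and the concatenate-then-MLP combiner are all assumptions for illustration only; the paper's actual operators may differ.

```python
# Minimal sketch of an MGNN-style layer, assuming precomputed motif adjacency
# matrices. All design choices here are illustrative placeholders, not the
# paper's exact operators.
import torch
import torch.nn as nn


class MGNNLayerSketch(nn.Module):
    """Hypothetical per-layer flow: (1) per-motif aggregation,
    (2) redundancy minimization across motifs, (3) combination."""

    def __init__(self, in_dim: int, out_dim: int, num_motifs: int):
        super().__init__()
        # One linear transform per motif (assumed; num_motifs >= 2).
        self.per_motif = nn.ModuleList(
            [nn.Linear(in_dim, out_dim) for _ in range(num_motifs)]
        )
        # Assumption: concatenation followed by an MLP as a simple stand-in
        # for the paper's injective combination function.
        self.combine = nn.Sequential(
            nn.Linear(num_motifs * out_dim, out_dim), nn.ReLU()
        )

    def forward(self, x: torch.Tensor, motif_adjs: list) -> torch.Tensor:
        # (1) One representation per motif, aggregated over motif-based
        #     neighborhoods (adj is an N x N motif adjacency matrix).
        reps = [
            torch.relu(adj @ lin(x))
            for adj, lin in zip(motif_adjs, self.per_motif)
        ]

        # (2) Crude stand-in for redundancy minimization: remove from each
        #     motif's representation the average of the other motifs',
        #     keeping what is unique to that motif.
        distilled = []
        for i, r in enumerate(reps):
            others = torch.stack(
                [r2 for j, r2 in enumerate(reps) if j != i]
            ).mean(dim=0)
            distilled.append(r - others)

        # (3) Combine the per-motif representations into updated node embeddings.
        return self.combine(torch.cat(distilled, dim=-1))
```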

Keywords

Graph neural network (GNN), graph representation, high-order structure, motif

Discipline

Databases and Information Systems | Numerical Analysis and Scientific Computing

Research Areas

Data Science and Engineering

Publication

IEEE Transactions on Neural Networks and Learning Systems

First Page

1

Last Page

15

ISSN

2162-237X

Identifier

10.1109/TNNLS.2023.3281716

Publisher

Institute of Electrical and Electronics Engineers

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.1109/TNNLS.2023.3281716
