Publication Type

Journal Article

Version

acceptedVersion

Publication Date

1-2024

Abstract

Hard negative mining has proven effective in enhancing self-supervised contrastive learning (CL) on diverse data types, including graph CL (GCL). Existing hardness-aware CL methods typically treat the negative instances that are most similar to the anchor instance as hard negatives, which helps improve CL performance, especially on image data. However, on graph data this approach often fails to identify true hard negatives and instead produces many false negatives. This is mainly because the learned graph representations are not sufficiently discriminative, owing to oversmoothed representations and/or non-independent and identically distributed (non-i.i.d.) issues in graph data. To tackle this problem, this article proposes a novel approach that builds a discriminative model on affinity information (i.e., two sets of pairwise affinities between the negative instances and the anchor instance) to mine hard negatives in GCL. In particular, the proposed approach evaluates how confident/uncertain the discriminative model is about the affinity of each negative instance to an anchor instance, and uses this to determine the instance's hardness weight relative to that anchor. This uncertainty information is then incorporated into existing GCL loss functions via a weighting term to enhance their performance. The enhanced GCL is theoretically grounded: the resulting GCL loss is equivalent to a triplet loss with a margin that is exponentially proportional to the learned uncertainty of each negative instance. Extensive experiments on ten graph datasets show that our approach 1) consistently enhances different state-of-the-art (SOTA) GCL methods in both graph and node classification tasks and 2) significantly improves their robustness against adversarial attacks. Code is available at https://github.com/mala-lab/AUGCL.
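The sketch below illustrates, in generic PyTorch, the idea of incorporating per-negative hardness weights into an InfoNCE-style contrastive loss via a weighting term, as described in the abstract. It is a minimal illustration only: the function name, tensor shapes, temperature value, and the way weights are supplied are assumptions made here for clarity, not the authors' implementation (which is available at the repository linked above), and it does not include the affinity-based discriminative model that produces the uncertainty estimates.

import torch
import torch.nn.functional as F

def weighted_infonce(anchor, positive, negatives, neg_weights, temperature=0.2):
    # anchor:      (d,)   embedding of the anchor instance
    # positive:    (d,)   embedding of its positive instance
    # negatives:   (m, d) embeddings of the m negative instances
    # neg_weights: (m,)   hardness weights for the negatives (larger = harder);
    #              in the paper these would come from the learned affinity
    #              uncertainty, but here they are simply given as input.
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_sim = torch.exp(anchor @ positive / temperature)   # scalar
    neg_sim = torch.exp(negatives @ anchor / temperature)  # (m,)
    weighted_neg = (neg_weights * neg_sim).sum()           # weighted negative term

    return -torch.log(pos_sim / (pos_sim + weighted_neg))

# Example usage with random embeddings and uniform weights (weights of 1
# recover the standard unweighted InfoNCE loss):
# loss = weighted_infonce(torch.randn(64), torch.randn(64),
#                         torch.randn(128, 64), torch.ones(128))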

Keywords

Affinity learning, Estimation, graph contrastive learning (GCL), hard negative mining, Loss measurement, Measurement uncertainty, Representation learning, Task analysis, Training, Uncertainty, uncertainty estimation

Discipline

Artificial Intelligence and Robotics | OS and Networks

Research Areas

Intelligent Systems and Optimization

Publication

IEEE Transactions on Neural Networks and Learning Systems

First Page

1

Last Page

11

ISSN

2162-237X

Identifier

10.1109/TNNLS.2023.3339770

Publisher

Institute of Electrical and Electronics Engineers

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.1109/TNNLS.2023.3339770
