Publication Type
Journal Article
Version
publishedVersion
Publication Date
6-2024
Abstract
Graphs can model complex relationships between objects, enabling a myriad of Web applications such as online page/article classification and social recommendation. While graph neural networks (GNNs) have emerged as a powerful tool for graph representation learning, their performance in an end-to-end supervised setting heavily relies on a large amount of task-specific supervision. To reduce the labeling requirement, the 'pre-train, fine-tune' and 'pre-train, prompt' paradigms have become increasingly common. In particular, prompting, a popular alternative to fine-tuning in natural language processing, is designed to narrow the gap between pre-training and downstream objectives in a task-specific manner. However, existing studies of prompting on graphs remain limited and lack a universal treatment that appeals to different downstream tasks. In this paper, we propose GraphPrompt, a novel pre-training and prompting framework on graphs. GraphPrompt not only unifies pre-training and downstream tasks into a common task template, but also employs a learnable prompt to assist a downstream task in locating the most relevant knowledge from the pre-trained model in a task-specific manner. GraphPrompt adopts simple yet effective designs in both pre-training and prompt tuning: during pre-training, a link prediction-based task is used to materialize the task template; during prompt tuning, a learnable prompt vector is applied to the ReadOut layer of the graph encoder. To further enhance GraphPrompt in these two stages, we extend it into GraphPrompt+ with two major enhancements. First, we generalize several popular graph pre-training tasks beyond simple link prediction to broaden compatibility with our task template. Second, we propose a more generalized prompt design that incorporates a series of prompt vectors within every layer of the pre-trained graph encoder, in order to capitalize on the hierarchical information across different layers beyond just the readout layer. Finally, we conduct extensive experiments on five public datasets to evaluate and analyze GraphPrompt and GraphPrompt+.
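The abstract describes prompt tuning as applying a learnable prompt vector to the ReadOut layer of a frozen, pre-trained graph encoder. The following is a minimal PyTorch sketch of that idea, assuming the prompt re-weights node embeddings element-wise before sum pooling; the class name PromptedReadout, the dimensions, and the pooling choice are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class PromptedReadout(nn.Module):
    """Hypothetical sketch: a learnable prompt vector re-weights node
    embeddings element-wise before sum pooling (the ReadOut layer).
    Only this prompt is tuned downstream; the pre-trained GNN encoder
    producing the node embeddings is assumed frozen."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # One learnable prompt vector per downstream task (assumption).
        self.prompt = nn.Parameter(torch.ones(hidden_dim))

    def forward(self, node_embeddings: torch.Tensor) -> torch.Tensor:
        # node_embeddings: (num_nodes, hidden_dim) from the frozen encoder.
        # Element-wise prompting, then sum readout to a (sub)graph embedding.
        return (self.prompt * node_embeddings).sum(dim=0)

# Usage: embeddings for a 10-node (sub)graph -> one prompted graph embedding.
readout = PromptedReadout(hidden_dim=64)
h = torch.randn(10, 64)
graph_emb = readout(h)  # shape: (64,)
```

Because only the prompt vector carries gradients, prompt tuning of this kind updates far fewer parameters than full fine-tuning, which is the efficiency argument the abstract makes for the 'pre-train, prompt' paradigm.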
Keywords
Few-shot learning, Fine-tuning, Graph mining, Graph neural networks, Meta-learning, Pre-training, Prompting, Representation learning, Task analysis, Tuning
Discipline
Databases and Information Systems | Graphics and Human Computer Interfaces
Publication
IEEE Transactions on Knowledge and Data Engineering
Volume
36
Issue
11
First Page
6237
Last Page
6250
ISSN
1041-4347
Identifier
10.1109/TKDE.2024.3419109
Publisher
Institute of Electrical and Electronics Engineers
Citation
YU, Xingtong; LIU, Zhenghao; FANG, Yuan; et al.
Generalized graph prompt: Toward a unification of pre-training and downstream tasks on graphs. (2024). IEEE Transactions on Knowledge and Data Engineering, 36(11), 6237-6250.
Available at: https://ink.library.smu.edu.sg/sis_research/9703
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1109/TKDE.2024.3419109
Included in
Databases and Information Systems Commons, Graphics and Human Computer Interfaces Commons