Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
5-2023
Abstract
Graphs can model complex relationships between objects, enabling a myriad of Web applications such as online page/article classification and social recommendation. While graph neural networks (GNNs) have emerged as a powerful tool for graph representation learning, their performance in an end-to-end supervised setting heavily relies on a large amount of task-specific supervision. To reduce the labeling requirement, the "pre-train, fine-tune" and "pre-train, prompt" paradigms have become increasingly common. In particular, prompting is a popular alternative to fine-tuning in natural language processing, designed to narrow the gap between pre-training and downstream objectives in a task-specific manner. However, existing studies of prompting on graphs are still limited, lacking a universal treatment that appeals to different downstream tasks. In this paper, we propose GraphPrompt, a novel pre-training and prompting framework on graphs. GraphPrompt not only unifies pre-training and downstream tasks into a common task template, but also employs a learnable prompt to assist a downstream task in locating the most relevant knowledge from the pre-trained model in a task-specific manner. Finally, we conduct extensive experiments on five public datasets to evaluate and analyze GraphPrompt.
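To make the learnable-prompt idea concrete, the following is a minimal PyTorch sketch, not the authors' released implementation: it assumes the prompt is a learnable vector that reweights frozen pre-trained node embeddings during graph readout, so each downstream task can emphasize different dimensions of the same pre-trained representation. The class name PromptedReadout and all tensor shapes are illustrative assumptions.

    import torch

    class PromptedReadout(torch.nn.Module):
        """Hypothetical task-specific readout: a learnable prompt vector
        reweights node embeddings from a frozen pre-trained GNN before
        mean pooling. Only the prompt is trained for the downstream task."""

        def __init__(self, dim: int):
            super().__init__()
            # Initialized to ones, i.e., an unmodified mean readout.
            self.prompt = torch.nn.Parameter(torch.ones(dim))

        def forward(self, node_emb: torch.Tensor) -> torch.Tensor:
            # node_emb: (num_nodes, dim) frozen embeddings; element-wise
            # reweighting, then mean pooling into one (sub)graph vector.
            return (node_emb * self.prompt).mean(dim=0)

    # Usage: embeddings for a 5-node (sub)graph with hidden dimension 64.
    emb = torch.randn(5, 64)          # stand-in for frozen GNN output
    readout = PromptedReadout(64)
    graph_vec = readout(emb)          # (64,), used for the downstream task

Because only the prompt vector is optimized while the pre-trained GNN stays frozen, the downstream parameter count is tiny, which is what makes prompting attractive in few-shot settings.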
Keywords
Graph neural networks, pre-training, prompt, few-shot learning
Discipline
Information Security
Research Areas
Data Science and Engineering
Publication
Proceedings of the 2023 ACM Web Conference, Austin, USA, April 30-May 4
First Page
417
Last Page
428
Identifier
10.1145/3543507.3583386
Publisher
ACM
City or Country
New York
Citation
LIU, Zemin; YU, Xingtong; FANG, Yuan; and ZHANG, Xinming.
GraphPrompt: Unifying pre-training and downstream tasks for graph neural networks. (2023). Proceedings of the 2023 ACM Web Conference, Austin, USA, April 30-May 4. 417-428.
Available at: https://ink.library.smu.edu.sg/sis_research/8191
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
http://doi.org/10.1145/3543507.3583386