Publication Type
Conference Proceeding Article
Version
acceptedVersion
Publication Date
4-2025
Abstract
Dynamic graphs capture evolving interactions between entities, such as in social networks, online learning platforms, and crowdsourcing projects. For dynamic graph modeling, dynamic graph neural networks (DGNNs) have emerged as a mainstream technique. However, they are generally pre-trained on the link prediction task, leaving a significant gap from the objectives of downstream tasks such as node classification. To bridge the gap, prompt-based learning has gained traction on graphs, but most existing efforts focus on static graphs and neglect the evolution of dynamic graphs. In this paper, we propose DYGPROMPT, a novel pre-training and prompt learning framework for dynamic graph modeling. First, we design dual prompts to address the discrepancies in both task objectives and temporal variations between pre-training and downstream tasks. Second, we recognize that node and time patterns often characterize each other, and propose dual condition-nets to model the evolving node-time patterns in downstream tasks. Finally, we thoroughly evaluate and analyze DYGPROMPT through extensive experiments on four public datasets.
Discipline
Graphics and Human Computer Interfaces
Research Areas
Intelligent Systems and Optimization
Areas of Excellence
Digital transformation
Publication
Proceedings of the Thirteenth International Conference on Learning Representations, Singapore, April 24-28, 2025
First Page
1
Last Page
20
Identifier
10.48550/arXiv.2405.13937
Publisher
ICLR
City or Country
Singapore
Citation
YU, Xingtong; LIU, Zhenghao; ZHANG, Xinming; and FANG, Yuan.
Node-time conditional prompt learning in dynamic graphs. (2025). Proceedings of the Thirteenth International Conference on Learning Representations, Singapore, April 24-28. 1-20.
Available at: https://ink.library.smu.edu.sg/sis_research/10693
Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.48550/arXiv.2405.13937