Publication Type

Journal Article

Version

acceptedVersion

Publication Date

2-2025

Abstract

Sparse Knowledge Graph (KG) scenarios pose a challenge for previous Knowledge Graph Completion (KGC) methods: completion performance degrades rapidly as graph sparsity increases. The problem is compounded by the prevalence of sparse KGs in practical applications. To address this challenge, we present a novel framework, LR-GCN, that automatically captures valuable long-range dependencies among entities to supplement insufficient structural features and distills logical reasoning knowledge for sparse KGC. The proposed approach comprises two main components: a GNN-based predictor and a reasoning path distiller. The reasoning path distiller explores high-order graph structures such as reasoning paths and encodes them as rich-semantic edges, explicitly compositing long-range dependencies into the predictor. This step also densifies the KG, effectively alleviating the sparsity issue. The path distiller further distills logical reasoning knowledge from the mined reasoning paths into the predictor. The two components are jointly optimized with a carefully designed variational EM algorithm. Extensive experiments and analyses on four sparse benchmarks demonstrate the effectiveness of the proposed method.
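
The abstract describes an alternating optimization between the two components. The sketch below illustrates that structure only: an E-step in which a path distiller mines reasoning paths and composites them into rich-semantic edges, and an M-step in which a GNN predictor is trained on the densified graph. All class and function names here (PathDistiller, GNNPredictor, variational_em, etc.) are hypothetical stand-ins, not the authors' actual implementation; the paper's distiller is reinforcement-learning-based, which is stubbed out here.

```python
# Minimal structural sketch, under assumed interfaces, of the
# variational-EM-style alternation described in the abstract.
import random

class PathDistiller:
    """E-step module: mines multi-hop reasoning paths between entity
    pairs. (The paper uses reinforcement learning; stubbed here.)"""
    def mine_path(self, kg, pair):
        # Placeholder: return a fake 2-hop path for the entity pair.
        return [pair[0], "rel_a", "mid_entity", "rel_b", pair[1]]

class GNNPredictor:
    """M-step module: scores candidate triples over the densified KG."""
    def __init__(self):
        self.extra_edges = []

    def add_path_edge(self, path):
        # Composite a mined path into a single rich-semantic edge,
        # adding a long-range dependency to the sparse graph.
        relations = path[1:-1:2]
        self.extra_edges.append((path[0], "+".join(relations), path[-1]))

    def train_step(self, kg):
        # Placeholder for one link-prediction gradient step.
        return random.random()  # stand-in for a loss value

def variational_em(kg, entity_pairs, rounds=3):
    distiller, predictor = PathDistiller(), GNNPredictor()
    for _ in range(rounds):
        # E-step: distill reasoning paths into rich-semantic edges.
        for pair in entity_pairs:
            predictor.add_path_edge(distiller.mine_path(kg, pair))
        # M-step: update the GNN predictor on the densified graph.
        predictor.train_step(kg)
    return predictor

predictor = variational_em(kg={}, entity_pairs=[("e1", "e2")])
print(predictor.extra_edges)  # [('e1', 'rel_a+rel_b', 'e2'), ...]
```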

Keywords

graph neural networks, knowledge graph completion, reinforcement learning

Discipline

Theory and Algorithms

Research Areas

Intelligent Systems and Optimization

Publication

Frontiers of Computer Science

Volume

19

Issue

2

First Page

1

Last Page

12

ISSN

2095-2228

Identifier

10.1007/s11704-023-3521-y

Publisher

Springer

Copyright Owner and License

Authors CC-BY

Creative Commons License

Creative Commons Attribution 3.0 License
