Computer science
Embedding
Artificial intelligence
Graph
Machine learning
Theoretical computer science
Authors
Jingchao Wang, Weimin Li, Fangfang Liu, Zhenhai Wang, Alex Munyole Luvembe, Qun Jin, Quan-Ke Pan, Fangyu Liu
Identifier
DOI:10.1016/j.eswa.2023.123116
Abstract
Knowledge graph completion (KGC) aims to fill in missing information in knowledge graphs (KGs). Most previous methods perform well in the transductive setting but are not applicable in the inductive setting, where test entities may be unseen during training. Recently proposed methods obtain inductive ability by learning logic rules from subgraphs. However, these works consider only the structural information of subgraphs while ignoring the rich contextual semantic information underlying KGs, which tends to yield sub-optimal embeddings. Furthermore, they tend to perform poorly when the subgraphs are sparse. To address these problems, we propose a global and local Context-enhanced Embedding network, ConeE, which fully exploits local and global contextual information to enhance embedding representations through two components. (1) The global context modeling module (GCMM) is a semi-parametric, coarse-grained global semantic extractor that extracts global context-based semantic information via a BERT-based context encoder and a semantic fusion network (SFN), and adopts a novel contrastive learning-based sampling strategy to optimize semantic features. In addition, a scoring network evaluates the confidence of triplets from the perspective of both the triplet facts and the reasoning path to improve prediction accuracy. (2) The local context modeling module (LCMM) employs an interactive graph neural network (IGNN) to extract local topological features from subgraphs, and applies mutual information maximization (MIM) to subgraph modeling to capture more local features. Experiments on benchmark datasets show that ConeE significantly outperforms existing state-of-the-art methods.
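The abstract mentions a contrastive learning-based sampling strategy for optimizing semantic features, but does not specify the loss. A minimal sketch of the commonly used InfoNCE-style objective is shown below, assuming cosine similarity between context embeddings; all function names and the toy vectors are hypothetical illustrations, not the paper's actual implementation.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss: low when the anchor embedding is
    close to the positive sample and far from the negative samples."""
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, neg) / temperature for neg in negatives]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))
```

With a well-aligned positive the loss is near zero; swapping the roles of the positive and negative samples drives it up, which is the gradient signal a contrastive sampling strategy exploits.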