Authors
Bo Cheng,Jia Zhu,Minzhe Guo
Identifier
DOI:10.1016/j.neucom.2022.05.058
Abstract
Entity Alignment (EA) is a crucial task in knowledge fusion that aims to link entities with the same real-world identity across different Knowledge Graphs (KGs). Existing methods achieve satisfactory performance; however, they mainly focus on single-modal KGs and are difficult to apply effectively to multi-modal scenarios. In this paper, we propose a Multi-modal Joint entity Alignment Framework (MultiJAF), which can effectively utilize the knowledge of various modalities. Concretely, we first learn the embeddings of the different modalities, i.e., the structure, attribute, and image modalities. Next, we adopt an attention-based multi-modal fusion network to integrate these embeddings and use the obtained joint embeddings to compute a joint embedding-based similarity matrix S_J. Moreover, we design a Numerical Process Module (NPM) to infer a similarity matrix S_N from the numerical information of entities. Finally, we use a simple late fusion method to ensemble the two similarity matrices for the final alignment. In addition, to reduce the cost of labeling data, we propose a novel NPM-based unsupervised multi-modal EA method. Experimental results on two real-world datasets demonstrate the effectiveness of the proposed MultiJAF.
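The abstract describes three computational steps: attention-based fusion of per-modality entity embeddings into joint embeddings, a joint embedding-based similarity matrix S_J, and a simple late fusion with the NPM-derived matrix S_N. The following is a minimal sketch of that pipeline in PyTorch; the module structure, the fixed fusion weight, and all tensor shapes are assumptions made for illustration and do not reproduce the authors' implementation.

```python
# Illustrative sketch only: attention-based fusion of modality embeddings,
# cosine similarity S_J, and late fusion with a placeholder S_N.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionFusion(nn.Module):
    """Fuse structure / attribute / image embeddings with learned attention weights."""

    def __init__(self, dim: int):
        super().__init__()
        # Hypothetical parameterization: one scoring vector shared across modalities.
        self.score = nn.Linear(dim, 1, bias=False)

    def forward(self, modality_embs: list) -> torch.Tensor:
        # modality_embs: list of [num_entities, dim] tensors, one per modality.
        stacked = torch.stack(modality_embs, dim=1)      # [N, M, dim]
        weights = F.softmax(self.score(stacked), dim=1)  # [N, M, 1]
        return (weights * stacked).sum(dim=1)            # joint embeddings [N, dim]


def cosine_similarity_matrix(src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
    # Pairwise cosine similarity between source- and target-KG entity embeddings.
    return F.normalize(src, dim=-1) @ F.normalize(tgt, dim=-1).t()


def late_fusion(s_joint: torch.Tensor, s_numeric: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    # Weighted ensemble of the two similarity matrices; alpha is a
    # hypothetical hyper-parameter, not a value reported in the paper.
    return alpha * s_joint + (1 - alpha) * s_numeric


if __name__ == "__main__":
    n_src, n_tgt, dim = 5, 6, 32
    fusion = AttentionFusion(dim)
    # Toy per-modality embeddings (structure, attribute, image) for both KGs.
    src = fusion([torch.randn(n_src, dim) for _ in range(3)])
    tgt = fusion([torch.randn(n_tgt, dim) for _ in range(3)])
    s_j = cosine_similarity_matrix(src, tgt)   # joint embedding-based similarity S_J
    s_n = torch.rand(n_src, n_tgt)             # placeholder for the NPM output S_N
    s_final = late_fusion(s_j, s_n)
    print(s_final.argmax(dim=1))               # predicted target entity per source entity
```

Here the NPM output is replaced by random values purely so the late-fusion step is runnable; in the framework described above it would be inferred from the numerical attributes of entities.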