Computer science
Scalability
Embedding
Dimensionality reduction
Knowledge graph
Graph
Curse of dimensionality
Transformer
Encoder
Artificial intelligence
Data mining
Theoretical computer science
Machine learning
Database
Operating system
Physics
Voltage
Quantum mechanics
Authors
Peyman Baghershahi, Reshad Hosseini, Hadi Moradi
Identifier
DOI:10.1016/j.knosys.2022.110124
Abstract
A few models have tried to tackle the link prediction problem, also known as knowledge graph completion, by embedding knowledge graphs in comparatively low dimensions. However, state-of-the-art results are attained at the cost of considerably increasing the dimensionality of the embeddings, which causes scalability issues for huge knowledge bases. Transformers have recently been used successfully as powerful encoders for knowledge graphs, but the available models still have scalability issues. To address this limitation, we introduce a Transformer-based model that yields expressive low-dimensional embeddings. We use a large number of self-attention heads as the key to applying query-dependent projections that capture mutual information between entities and relations. Empirical results on WN18RR and FB15k-237, as standard link prediction benchmarks, demonstrate that our model performs favorably in comparison with the current state-of-the-art models. Notably, we achieve these promising results while reducing the dimensionality of the embeddings by 66.9% on average compared to the five best recent state-of-the-art competitors.
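To make the idea in the abstract concrete, the sketch below illustrates one plausible reading: a (head entity, relation) pair is embedded in a low dimension and encoded with multi-head self-attention, where a large number of heads provides many per-query projections that mix entity and relation information before scoring candidate tails. This is a minimal illustration under stated assumptions, not the authors' actual architecture; the class and parameter names (`LowDimKGEncoder`, `dim`, `num_heads`) are hypothetical.

```python
import torch
import torch.nn as nn

class LowDimKGEncoder(nn.Module):
    """Hypothetical sketch of a Transformer-style link-prediction scorer:
    low-dimensional entity/relation embeddings encoded jointly with
    multi-head self-attention. Names and layout are illustrative only."""

    def __init__(self, num_entities, num_relations, dim=100, num_heads=20):
        super().__init__()
        self.ent_emb = nn.Embedding(num_entities, dim)   # low-dimensional entity embeddings
        self.rel_emb = nn.Embedding(num_relations, dim)  # low-dimensional relation embeddings
        # Many attention heads: each head applies its own projection of the query.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, head_idx, rel_idx):
        h = self.ent_emb(head_idx)         # (batch, dim)
        r = self.rel_emb(rel_idx)          # (batch, dim)
        seq = torch.stack([h, r], dim=1)   # treat the (entity, relation) pair as a length-2 sequence
        out, _ = self.attn(seq, seq, seq)  # self-attention mixes entity and relation information
        out = self.norm(out + seq)         # residual connection + layer norm, as in a Transformer block
        query = out.mean(dim=1)            # pooled representation of the (head, relation) query
        # Score every entity as a candidate tail via dot product with its embedding.
        return query @ self.ent_emb.weight.t()   # (batch, num_entities)

# Usage: plausibility scores for a toy batch of (head, relation) queries.
model = LowDimKGEncoder(num_entities=40943, num_relations=11)  # WN18RR-sized vocabularies
heads = torch.tensor([0, 1])
rels = torch.tensor([3, 7])
scores = model(heads, rels)               # shape (2, 40943)
```

The key design point the abstract emphasizes is that expressiveness comes from the number of attention heads rather than from widening the embeddings, which is why `dim` stays small in this sketch.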