Computer science
Interpretability
Leverage (statistics)
Inference
Knowledge graph
Artificial intelligence
Reinforcement learning
Cluster analysis
Machine learning
Graph
Theoretical computer science
Authors
Qingqing Wang,Jiao Han,Danpu Zhang,Xuemei Dong
Identifier
DOI:10.1109/icbase59196.2023.10303211
Abstract
Knowledge reasoning methods play a pivotal role in various applications, including knowledge graph completion, knowledge-based question answering, and knowledge recommendation. Among these methods, path-based multi-hop reasoning techniques can leverage the rich graph information in knowledge graphs beyond triplets, but they still encounter certain challenges. Existing multi-hop knowledge reasoning methods rely heavily on data and lack interpretability. Additionally, the vast path exploration space, composed of the numerous entities and relations in large knowledge graphs, often leads to irrelevant and redundant exploration. To address these challenges, this paper proposes a novel knowledge reasoning method named ALMARL (Attention-based LSTM and Multi-Agent Reinforcement Learning for Knowledge Graph Reasoning), which combines an Attention-based LSTM with multi-agent reinforcement learning. The method first employs clustering techniques to group entities. Based on the clustering results, agents at different levels are established to selectively explore certain clusters or to limit the search to specific clusters, effectively reducing the exploration of irrelevant entities and minimizing redundant exploration. The agents then explore paths efficiently and mine deep semantic information in entity relationships through the integrated Attention-based LSTM. Finally, the model produces inference results and extracts interpretable paths. We evaluate the proposed model on two tasks: link prediction and fact prediction. Experimental results demonstrate significant performance improvements over several competitive baselines.
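The abstract's core pruning idea, clustering entities and then restricting an agent's next-hop candidates to a relevant cluster, can be sketched as below. This is a minimal illustration, not the paper's implementation: the toy entity names, 2-D embeddings, deterministic farthest-point k-means, and the `prune_actions` helper are all assumptions introduced for the example.

```python
# Sketch of cluster-based pruning of a path-exploration action space.
# Toy data and helpers are hypothetical, not from the ALMARL paper.

def sq_dist(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    """Plain Lloyd's k-means with deterministic farthest-point initialization."""
    centroids = [points[0]]
    while len(centroids) < k:
        # Pick the point farthest from all current centroids.
        centroids.append(max(points, key=lambda p: min(sq_dist(p, c) for c in centroids)))
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        for i, p in enumerate(points):
            assign[i] = min(range(k), key=lambda c: sq_dist(p, centroids[c]))
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centroids[c] = tuple(sum(col) / len(members) for col in zip(*members))
    return assign

# Hypothetical entity embeddings: two loose groups (places vs. people).
emb = {"paris": (0.9, 0.1), "london": (0.8, 0.2), "berlin": (0.85, 0.15),
       "einstein": (0.1, 0.9), "curie": (0.2, 0.8), "turing": (0.15, 0.85)}
entities = list(emb)
assign = kmeans([emb[e] for e in entities], k=2)
cluster_of = dict(zip(entities, assign))

def prune_actions(current_entity, candidate_next, cluster_of):
    """Keep only next-hop entities in the same cluster as the current entity,
    shrinking the exploration space an agent must consider."""
    c = cluster_of[current_entity]
    return [e for e in candidate_next if cluster_of[e] == c]

pruned = prune_actions("paris", ["london", "einstein", "berlin"], cluster_of)
# "einstein" is dropped: it falls in the other cluster.
```

In the paper's full method, higher-level agents would choose which cluster to enter rather than discarding it outright, and an Attention-based LSTM would score the remaining candidates; this sketch shows only the search-space reduction.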