Computer science
Focus (optics)
Natural language processing
Graph
Zero (linguistics)
Knowledge graph
Artificial intelligence
Semantic memory
Linguistics
Theoretical computer science
Psychology
Physics
Cognition
Philosophy
Neuroscience
Optics
Authors
Rui Yang, Jiahao Zhu, Jianping Man, Fang Li, Yi Zhou
Identifiers
DOI:10.1016/j.knosys.2024.112155
Abstract
The design and development of text-based knowledge graph completion (KGC) methods leveraging textual entity descriptions are at the forefront of research. These methods involve advanced optimization techniques such as soft prompts and contrastive learning to enhance KGC models. The effectiveness of text-based methods largely hinges on the quality and richness of the training data. Large language models (LLMs) can utilize straightforward prompts to alter text data, thereby enabling data augmentation for KGC. Nevertheless, LLMs typically demand substantial computational resources. To address these issues, we introduce a framework termed constrained prompts for KGC (CP-KGC). The CP-KGC framework designs prompts that adapt to different datasets to enhance semantic richness. Additionally, CP-KGC employs a context constraint strategy to effectively identify polysemous entities within KGC datasets. Through extensive experimentation, we have verified the effectiveness of this framework. Even after quantization, the LLM (Qwen-7B-Chat-int4) still enhances the performance of text-based KGC methods. This study extends the performance limits of existing models and promotes further integration of KGC with LLMs.
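To make the abstract's two ideas concrete, the following is a minimal, hypothetical sketch of what a "constrained prompt" with a context constraint might look like: a length-limited rewriting instruction for an entity description, optionally augmented with neighboring triples so an LLM can disambiguate polysemous entities. The function name, template wording, and triple format are illustrative assumptions, not the paper's actual implementation.

```python
def build_constrained_prompt(entity, description, context_triples=None, max_words=50):
    """Build a length-constrained prompt asking an LLM to condense an entity
    description. Hypothetical template, not CP-KGC's exact wording."""
    prompt = (
        f"Rewrite the following description of '{entity}' in at most "
        f"{max_words} words, keeping only facts about the entity itself:\n"
        f"{description}\n"
    )
    if context_triples:
        # Context constraint: neighboring KG triples (head, relation, tail)
        # anchor the intended sense of a polysemous entity name.
        facts = "; ".join(f"{h} {r} {t}" for h, r, t in context_triples)
        prompt += f"Use these known facts to resolve ambiguity: {facts}\n"
    return prompt


# Example: 'Mercury' is polysemous (planet vs. element vs. deity);
# the triples constrain the LLM toward the planetary sense.
prompt = build_constrained_prompt(
    "Mercury",
    "Mercury is the smallest planet in the Solar System and the closest to the Sun.",
    context_triples=[("Mercury", "orbits", "Sun"), ("Mercury", "type", "planet")],
)
```

The resulting prompt string would then be sent to the (possibly quantized) LLM; the constrained, shortened output replaces or augments the entity description used by the downstream text-based KGC model.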