Authors
Wujun Shao, Yaohua Hu, Pengli Ji, Xiaoran Yan, Dongwei Fan, Rui Zhang
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Identifier
DOI: 10.48550/arxiv.2310.17892
Abstract
Astronomical knowledge entities, such as celestial object identifiers, are crucial for literature retrieval, knowledge graph construction, and other research and applications in the field of astronomy. Traditional methods of extracting knowledge entities from texts face challenges such as high manual effort, poor generalization, and costly maintenance. Consequently, there is a pressing need for improved methods to extract them efficiently. This study explores the potential of pre-trained Large Language Models (LLMs) to perform the astronomical knowledge entity extraction (KEE) task on astrophysical journal articles using prompts. We propose a prompting strategy called Prompt-KEE, which comprises five prompt elements, and design eight combination prompts based on them. Celestial object identifiers and telescope names, the two most typical astronomical knowledge entities, are selected as experimental objects. We introduce four representative LLMs: Llama-2-70B, GPT-3.5, GPT-4, and Claude 2. To accommodate their token limitations, we construct two datasets: the full texts and the paragraph collections of 30 articles. Leveraging the eight prompts, we test on full texts with GPT-4 and Claude 2, and on paragraph collections with all four LLMs. The experimental results demonstrate that pre-trained LLMs have significant potential to perform KEE tasks on astrophysics journal articles, though their performance differs. Furthermore, we analyze some important factors that influence the performance of LLMs in entity extraction and provide insights for future KEE tasks on astrophysical articles using LLMs.
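To make the workflow concrete, the sketch below shows how a Prompt-KEE-style pipeline might assemble a prompt and parse the model's reply. The abstract does not list the five prompt elements, so the elements used here (role, task, output format, example, input passage) are illustrative assumptions, and no real LLM API is called; only the prompt construction and reply parsing are shown.

```python
# Hypothetical sketch of a Prompt-KEE-style helper; the five prompt elements
# below are assumptions for illustration -- the paper's actual elements are
# not specified in the abstract.
import json


def build_kee_prompt(entity_type: str, passage: str) -> str:
    """Assemble a knowledge-entity-extraction prompt from five elements."""
    role = "You are an expert reader of astrophysics literature."
    task = f"Extract every {entity_type} mentioned in the passage."
    fmt = 'Answer with a JSON list of strings, e.g. ["NGC 1275"].'
    example = 'Example -- Passage: "We observed M31 last night." -> ["M31"]'
    return "\n".join([role, task, fmt, example, f"Passage: {passage}"])


def parse_entities(llm_reply: str) -> list[str]:
    """Parse the model's JSON-list reply, tolerating surrounding text."""
    start = llm_reply.find("[")
    end = llm_reply.rfind("]") + 1
    if start == -1 or end == 0:
        return []  # no JSON list found in the reply
    return json.loads(llm_reply[start:end])
```

The prompt string returned by `build_kee_prompt` would be sent to an LLM (full text or one paragraph at a time, depending on the model's context window), and `parse_entities` would recover the extracted identifiers from its reply.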