Computer Science
Ontology
Natural Language Processing
Halo
Artificial Intelligence
Epistemology
Philosophy
Physics
Quantum Mechanics
Milky Way
Authors
Navapat Nananukul,Mayank Kejriwal
Abstract
Recent progress in generative AI, including Large Language Models (LLMs) like ChatGPT, has opened up significant opportunities in fields ranging from natural language processing to knowledge discovery and data mining. However, there is also a growing awareness that these models can be prone to problems such as fabricating information ('hallucinations') and faulty reasoning on seemingly simple problems. Because of the popularity of models like ChatGPT, both academic scholars and citizen scientists have documented hallucinations of several different types and severities. Despite this body of work, a formal model for describing and representing these hallucinations (with relevant metadata) at a fine-grained level is still lacking. In this paper, we address this gap by presenting the Hallucination Ontology, or HALO, a formal, extensible ontology written in OWL that currently supports six different types of hallucinations known to arise in LLMs, along with provenance and experimental metadata. We also collect and publish a dataset of hallucinations that we inductively gathered across multiple independent Web sources, and we show that HALO can be successfully used to model this dataset and answer competency questions.
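Because HALO is an OWL ontology, a documented hallucination can be recorded as an RDF individual typed against one of its classes, together with provenance and experimental metadata. The following is a minimal sketch in Python using rdflib, assuming a hypothetical namespace IRI and illustrative class and property names (FactualHallucination, generatedBy, sourceURL, prompt); these are stand-ins, not the published HALO vocabulary.

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import OWL, RDF, RDFS

    # Hypothetical namespaces; the real HALO IRI and term names may differ.
    HALO = Namespace("https://example.org/halo#")
    EX = Namespace("https://example.org/data#")

    g = Graph()
    g.bind("halo", HALO)
    g.bind("ex", EX)

    # Assumed class hierarchy: a specific hallucination type under a root class.
    g.add((HALO.Hallucination, RDF.type, OWL.Class))
    g.add((HALO.FactualHallucination, RDF.type, OWL.Class))
    g.add((HALO.FactualHallucination, RDFS.subClassOf, HALO.Hallucination))

    # One documented hallucination instance with simple provenance and
    # experimental metadata (property names are illustrative only).
    g.add((EX.case1, RDF.type, HALO.FactualHallucination))
    g.add((EX.case1, HALO.generatedBy, Literal("ChatGPT")))
    g.add((EX.case1, HALO.sourceURL, URIRef("https://example.org/report/1")))
    g.add((EX.case1, HALO.prompt, Literal("Who discovered the planet Vulcan?")))

    # Serialize the graph as Turtle; the result could then be queried with
    # SPARQL to answer competency questions over the collected dataset.
    print(g.serialize(format="turtle"))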