Keywords
Relevance feedback, Computer science, Relevance (law), Language model, Generative grammar, Set (abstract data type), Artificial intelligence, Ranking (information retrieval), Probabilistic logic, Generative model, Information retrieval, Precision and recall, Natural language processing, Learning to rank, Topic model, Recall, Machine learning, Image retrieval, Image (mathematics), Linguistics, Philosophy, Political science, Law, Programming language
Authors
Iain Mackie,Shubham Chatterjee,Jeff Dalton
Identifier
DOI:10.1145/3539618.3591992
Abstract
Current query expansion models use pseudo-relevance feedback (PRF) to improve first-pass retrieval effectiveness; however, this fails when the initial results are not relevant. Instead of building a language model from retrieved results, we propose Generative Relevance Feedback (GRF), which builds probabilistic feedback models from long-form text generated by Large Language Models. We study effective methods for generating text by varying the zero-shot generation subtasks: queries, entities, facts, news articles, documents, and essays. We evaluate GRF on document retrieval benchmarks covering a diverse set of queries and document collections, and the results show that GRF methods significantly outperform previous PRF methods. Specifically, we improve MAP by 5-19% and NDCG@10 by 17-24% compared to RM3 expansion, and achieve state-of-the-art recall across all datasets.
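The abstract describes building a probabilistic feedback model from LLM-generated text and interpolating it with the original query, in the spirit of RM3. A minimal sketch of that idea is below; the function names, the unigram feedback model, and the stubbed `generated` text are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter
import re

def build_feedback_model(generated_texts, top_k=10):
    """Estimate a simple unigram feedback distribution over terms
    from LLM-generated long-form text (assumption: unigram model)."""
    counts = Counter()
    for text in generated_texts:
        counts.update(re.findall(r"[a-z]+", text.lower()))
    total = sum(c for _, c in counts.most_common(top_k))
    return {t: c / total for t, c in counts.most_common(top_k)}

def expand_query(query, feedback_model, orig_weight=0.5):
    """RM3-style interpolation of original query terms with
    feedback terms; orig_weight balances the two distributions."""
    q_terms = query.lower().split()
    expanded = {t: orig_weight / len(q_terms) for t in q_terms}
    for t, p in feedback_model.items():
        expanded[t] = expanded.get(t, 0.0) + (1 - orig_weight) * p
    return expanded

# Hypothetical stand-in for zero-shot LLM generation
generated = ["Pseudo relevance feedback expands queries using retrieved documents"]
model = build_feedback_model(generated)
print(expand_query("query expansion", model))
```

In GRF the generated text replaces the top-ranked documents that classical PRF would use, so expansion quality no longer depends on the relevance of first-pass results.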