Authors
António Farinhas,Pascal Denis,André F. T. Martins
Identifier
DOI: 10.18653/v1/2023.emnlp-main.733
Abstract
Large language models (LLMs) are becoming a one-fits-many solution, but they sometimes hallucinate or produce unreliable output. In this paper, we investigate how hypothesis ensembling can improve the quality of the generated text for the specific problem of LLM-based machine translation. We experiment with several techniques for ensembling hypotheses produced by LLMs such as ChatGPT, LLaMA, and Alpaca. We provide a comprehensive study along multiple dimensions, including the method to generate hypotheses (multiple prompts, temperature-based sampling, and beam search) and the strategy to produce the final translation (instruction-based, quality-based reranking, and minimum Bayes risk (MBR) decoding). Our results show that MBR decoding is a very effective method, that translation quality can be improved using a small number of samples, and that instruction tuning has a strong impact on the relation between the diversity of the hypotheses and the sampling temperature.