Authors
Hugo Touvron,Louis Martin,Kevin H. Stone,Peter J. Albert,Amjad Almahairi,Yasmine Babaei,Nikolay Bashlykov,Sanjay Batra,Prajjwal Bhargava,Shruti Bhosale,Dan Bikel,Lee Blecher,Cristian Canton Ferrer,Moya Chen,Guillem Cucurull,David Esiobu,Jude Fernandes,Jing Fu,Wentao Fu,Brian Fuller,Cynthia Gao,Vedanuj Goswami,Naman Goyal,Anthony Hartshorn,Saghar Hosseini,Rui Hou,Hakan Inan,Marcin Kardas,Viktor Kerkez,Madian Khabsa,Isabel M. Kloumann,A. S. Korenev,Punit Singh Koura,Marie-Anne Lachaux,Thibaut Lavril,J. Lee,Diana Liskovich,Yinghai Lu,Yuning Mao,Xavier Martinet,Todor Mihaylov,Pushkar Mishra,Igor Molybog,Yong Nie,A.M. Poulton,Jeremy Reizenstein,Rashi Rungta,Kalyan Saladi,Alan Schelten,Ruan Silva,Eric M. Smith,Ravi Subramanian,Xiang‐Yang Tan,Binh Tang,R. M. Taylor,Adina Williams,Jian Xiang Kuan,Puxin Xu,Zheng Yan,Iliyan Zarov,Yuchen Zhang,Angela Fan,Melanie Kambadur,Sharan Narang,Aurelien Rodriguez,Robert Stojnic,Sergey Edunov,Thomas Scialom
Abstract
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, they may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.