A Survey of Large Language Models

Language models · Computer science · Mainstream · Proportion (ratio) · Artificial intelligence · Scaling · Data science · Natural language processing · Political science · Mathematics · Physics · Geometry · Quantum mechanics · Law
Authors
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Yang Chen, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, Ji-Rong Wen
Source
Venue: Cornell University - arXiv · Cited by: 654
Identifiers
DOI: 10.48550/arxiv.2303.18223
Abstract

Language is essentially a complex, intricate system of human expression governed by grammatical rules. Developing capable AI algorithms for comprehending and mastering a language poses a significant challenge. As a major approach, language modeling has been widely studied for language understanding and generation over the past two decades, evolving from statistical language models to neural language models. Recently, pre-trained language models (PLMs) have been proposed by pre-training Transformer models over large-scale corpora, showing strong capabilities in solving various NLP tasks. Since researchers have found that model scaling can lead to performance improvements, they have further studied the scaling effect by increasing the model size even further. Interestingly, when the parameter scale exceeds a certain level, these enlarged language models not only achieve significant performance improvements but also exhibit special abilities that are absent in small-scale language models. To mark this difference in parameter scale, the research community has coined the term large language models (LLMs) for PLMs of significant size. Recently, research on LLMs has been advanced substantially by both academia and industry, and a remarkable milestone is the launch of ChatGPT, which has attracted widespread attention from society. The technical evolution of LLMs has had an important impact on the entire AI community and may revolutionize the way we develop and use AI algorithms. In this survey, we review recent advances in LLMs by introducing the background, key findings, and mainstream techniques. In particular, we focus on four major aspects of LLMs: pre-training, adaptation tuning, utilization, and capacity evaluation. In addition, we summarize the available resources for developing LLMs and discuss remaining issues for future directions.
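The statistical language models that the abstract names as the starting point of this evolution reduce to counting n-gram frequencies and applying the chain rule of probability. Below is a minimal sketch of a bigram model in Python; the toy corpus, the `bigram_prob` helper, and the add-alpha smoothing constant are illustrative assumptions, not details taken from the survey. Neural and pre-trained language models replace the count table with a learned network but keep the same next-token factorization.

```python
from collections import Counter

# Minimal sketch of a statistical (bigram) language model, the earliest
# stage of the evolution the abstract describes. The toy corpus and the
# add-alpha smoothing are illustrative assumptions, not from the survey.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

bigrams = Counter(zip(corpus, corpus[1:]))   # counts of (w_{t-1}, w_t) pairs
unigrams = Counter(corpus)                   # counts of single tokens
vocab_size = len(set(corpus))

def bigram_prob(prev, word, alpha=1.0):
    """P(word | prev) with add-alpha (Laplace) smoothing."""
    return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab_size)

def sentence_prob(tokens):
    """Chain rule under a bigram assumption: P(w_1..w_T) ~= prod_t P(w_t | w_{t-1})."""
    p = 1.0
    for prev, word in zip(tokens, tokens[1:]):
        p *= bigram_prob(prev, word)
    return p

print(sentence_prob("the cat sat on the mat .".split()))
```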
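The scaling effect the abstract alludes to is often summarized by empirical power laws. One commonly cited form, L(N) = (N_c / N)^alpha_N with the fitted constants of Kaplan et al. (2020), comes from that paper rather than from this abstract; the sketch below uses it only to make the claim concrete.

```python
# Hedged illustration of the scaling effect: predicted test loss falls as
# a power law in parameter count N. The form L(N) = (N_c / N)**alpha_N and
# the constants follow the empirical fits of Kaplan et al. (2020); they are
# not figures taken from this survey's abstract.
N_c, alpha_N = 8.8e13, 0.076

for n_params in (1e8, 1e9, 1e10, 1e11, 1e12):
    loss = (N_c / n_params) ** alpha_N
    print(f"{n_params:.0e} parameters -> predicted test loss {loss:.2f}")
```

Note that such smooth loss curves do not by themselves explain the emergent abilities the abstract mentions, which appear only past a certain parameter scale on downstream tasks.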