A Survey of Large Language Models

Authors
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Yang Chen, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu
Source
Journal: Cornell University - arXiv. Citations: 1325
Identifier
DOI: 10.48550/arxiv.2303.18223
Abstract

Language is essentially a complex, intricate system of human expressions governed by grammatical rules. It poses a significant challenge to develop capable AI algorithms for comprehending and mastering a language. As a major approach, language modeling has been widely studied for language understanding and generation over the past two decades, evolving from statistical language models to neural language models. Recently, pre-trained language models (PLMs) have been proposed by pre-training Transformer models over large-scale corpora, showing strong capabilities in solving various NLP tasks. Since researchers have found that model scaling leads to performance improvements, they have further studied the scaling effect by increasing the model size to ever larger scales. Interestingly, when the parameter scale exceeds a certain level, these enlarged language models not only achieve significant performance improvements but also exhibit special abilities that are not present in small-scale language models. To mark this difference in parameter scale, the research community has coined the term large language models (LLMs) for PLMs of significant size. Recently, research on LLMs has been rapidly advanced by both academia and industry, and a remarkable milestone is the launch of ChatGPT, which has attracted widespread attention from society. The technical evolution of LLMs has been making an important impact on the entire AI community and would revolutionize the way we develop and use AI algorithms. In this survey, we review the recent advances in LLMs by introducing the background, key findings, and mainstream techniques. In particular, we focus on four major aspects of LLMs: pre-training, adaptation tuning, utilization, and capacity evaluation. We also summarize the available resources for developing LLMs and discuss remaining issues and future directions.
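To make the abstract's notion of "language modeling" concrete, below is a minimal sketch in LaTeX of the standard autoregressive factorization that statistical, neural, and pre-trained language models all estimate, together with the empirical power-law form of the "scaling effect" mentioned above. The power law and the constants N_c and alpha_N are background assumptions drawn from the scaling-law literature this survey reviews (Kaplan et al., 2020), not values stated on this page.

% Autoregressive language modeling: the joint probability of a token
% sequence w_1, ..., w_T factorizes into next-token predictions.
\[
P(w_1, \dots, w_T) \;=\; \prod_{t=1}^{T} P\bigl(w_t \mid w_1, \dots, w_{t-1}\bigr)
\]
% Scaling effect (assumed form, after Kaplan et al., 2020): test loss L
% falls as a power law in the number of non-embedding parameters N,
% with empirically fitted constants N_c and \alpha_N.
\[
L(N) \;\approx\; \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad \alpha_N \approx 0.076
\]

Under this reading, "model scaling leads to performance improvements" means that increasing N predictably lowers L(N), while the emergent abilities the abstract describes are qualitative jumps that such smooth curves do not capture.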