Spoken Language Intelligence of Large Language Models for Language Learning

Keywords: spoken language, computer science, domain, language acquisition, phonology, psychology, artificial intelligence, linguistics, mathematics education, philosophy, political science, law
Authors
Linkai Peng, Baorian Nuchged, Yingming Gao
Source
Journal: Cornell University - arXiv
Identifier
DOI: 10.48550/arxiv.2308.14536
Abstract

People have long hoped for a conversational system that can assist in real-life situations, and recent progress on large language models (LLMs) is bringing this idea closer to reality. While LLMs are often impressive in performance, their efficacy in real-world scenarios that demand expert knowledge remains unclear. LLMs are believed to hold the most potential and value in education, especially in the development of artificial intelligence (AI)-based virtual teachers capable of facilitating language learning. We focus on evaluating the efficacy of LLMs in education, specifically in spoken language learning, which encompasses phonetics, phonology, and second language acquisition. We introduce a new multiple-choice question dataset to evaluate the effectiveness of LLMs in these scenarios, covering both understanding and application of spoken language knowledge. In addition, we investigate the influence of various prompting techniques, such as zero- and few-shot methods (prepending the question with question-answer exemplars), chain-of-thought (CoT, think step by step), in-domain exemplars, and external tools (Google, Wikipedia). We conducted a large-scale evaluation of popular LLMs (20 distinct models) using these methods. We achieved significant performance improvements over the zero-shot baseline on practical reasoning questions (GPT-3.5, 49.1% -> 63.1%; LLaMA2-70B-Chat, 42.2% -> 48.6%). We found that models of different sizes have a good understanding of concepts in phonetics, phonology, and second language acquisition, but show limitations in reasoning about real-world problems. Additionally, we explore preliminary findings on conversational communication.
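The abstract compares zero-shot, few-shot, and chain-of-thought prompting on multiple-choice questions. A minimal sketch of how such prompts might be assembled is below; the function name, exemplar wording, and template are illustrative assumptions, not the paper's actual templates.

```python
# Illustrative sketch of the prompting styles compared in the abstract.
# The template and exemplars are hypothetical, not taken from the paper's dataset.

def build_prompt(question, choices, exemplars=None, chain_of_thought=False):
    """Assemble a multiple-choice prompt.

    exemplars: optional list of (question, answer) pairs prepended
        for few-shot prompting; omit for zero-shot.
    chain_of_thought: append the standard "think step by step" cue.
    """
    parts = []
    # Few-shot: prepend question-answer exemplars before the target question.
    for ex_q, ex_a in (exemplars or []):
        parts.append(f"Q: {ex_q}\nA: {ex_a}")
    # Letter the answer choices A, B, C, ...
    lettered = "\n".join(f"{chr(65 + i)}. {c}" for i, c in enumerate(choices))
    parts.append(f"Q: {question}\n{lettered}")
    if chain_of_thought:
        parts.append("Let's think step by step.")
    else:
        parts.append("A:")
    return "\n\n".join(parts)


zero_shot = build_prompt(
    "Which of these is a high front vowel?", ["/i/", "/a/", "/u/"])

few_shot_cot = build_prompt(
    "Which of these is a high front vowel?", ["/i/", "/a/", "/u/"],
    exemplars=[("What is a phoneme?",
                "The smallest sound unit that distinguishes meaning.")],
    chain_of_thought=True)
```

The zero-shot prompt contains only the lettered question; the few-shot CoT variant prepends the exemplar and ends with the step-by-step cue instead of a bare answer slot.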