Computer science
Natural language processing
Artificial intelligence
Authors
Milind Shah, Dweepna Garg, Ankita Kothari, Pinal Hansora, Apoorva Shah, Monali Parikh
Source
Journal: Lecture Notes in Networks and Systems
Date: 2024-01-01
Pages: 31-40
Identifier
DOI: 10.1007/978-981-97-1260-1_4
Abstract
Few-shot learning is an area of machine learning that focuses on training models capable of performing new tasks effectively from only a limited number of labeled instances. This contrasts with standard machine learning approaches, in which models are trained on large datasets of labeled instances. Few-shot learning remains a significant challenge, and its importance keeps rising because the labeled data available for training machine learning models is often scarce: in numerous practical situations, it is impossible to collect a large labeled dataset for every task that needs to be solved. The typical procedure involves two distinct stages: pre-training and fine-tuning. During the pre-training phase, a language model is trained on an extensive collection of textual data, such as web pages or books, so that it learns language patterns and encodes this knowledge in its parameters. In the subsequent fine-tuning stage, the pre-trained model is further trained on a smaller labeled dataset specific to the desired task, allowing it to adapt to the target domain or classification problem. This paper reviews recent few-shot learning (FSL) advances for natural language processing (NLP). It defines the procedure of few-shot learning in NLP, explores its associated challenges, describes the few-shot learning problem, and discusses related learning problems.
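As a concrete illustration of the pre-train/fine-tune recipe the abstract describes, below is a minimal sketch, assuming the Hugging Face transformers library and PyTorch. The checkpoint name (bert-base-uncased) stands in for any language model that has already been pre-trained on a large text corpus, and the four-example sentiment dataset is hypothetical; neither comes from the paper itself.

```python
# Minimal sketch of the fine-tuning stage in a few-shot setting,
# assuming the Hugging Face `transformers` library. "bert-base-uncased"
# stands in for a language model already pre-trained on a large corpus
# (the pre-training stage); the tiny labeled dataset is hypothetical.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical few-shot training set: a handful of labeled instances
# for a binary sentiment task (1 = positive, 0 = negative).
few_shot_examples = [
    ("The movie was a delight from start to finish.", 1),
    ("A tedious, overlong mess.", 0),
    ("Surprisingly moving and well acted.", 1),
    ("I walked out halfway through.", 0),
]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):  # a few epochs are typical with this little data
    for text, label in few_shot_examples:
        batch = tokenizer(text, return_tensors="pt", truncation=True)
        outputs = model(**batch, labels=torch.tensor([label]))
        outputs.loss.backward()  # adapt the pre-trained weights
        optimizer.step()
        optimizer.zero_grad()
```

In a genuine few-shot regime, practitioners often freeze most of the pre-trained parameters or turn to prompt-based methods to avoid overfitting a handful of examples; the loop above only shows where the small task-specific labeled dataset enters the standard fine-tuning stage.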