Computer science
Initialization
Meta-learning (computer science)
Artificial intelligence
Stability (learning theory)
Generalization
Machine learning
Convolutional neural network
Gradient descent
Artificial neural network
Speech recognition
Mathematics
Task (project management)
Mathematical analysis
Management
Economics
Programming language
Authors
Qiulin Wang,Wenxuan Hu,Lin Li,Qingyang Hong
Identifier
DOI: 10.1109/icassp49357.2023.10094936
Abstract
Model-Agnostic Meta-Learning (MAML) is an effective meta-learning algorithm for low-resource automatic speech recognition (ASR). It uses gradient descent across multiple source languages to learn initialization parameters from which the model can quickly adapt to unseen low-resource languages. However, MAML's bilevel loss-backward structure makes training unstable, which significantly degrades both the stability and the generalization of the model. Because different source languages contribute differently to the target language, the loss weights reflecting each language's contribution require costly manual tuning during training, and the choice of these weights influences the performance of the entire model. In this paper, we propose a loss-weight adaptation method for MAML that uses a Convolutional Neural Network (CNN) with homoscedastic uncertainty. Experimental results show that the proposed method outperforms previous gradient-based meta-learning methods and other loss-weight adaptation methods, further improving the stability and effectiveness of MAML.
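The two ideas in the abstract can be sketched together: MAML's bilevel update (an inner gradient step per language, then an outer gradient through that step on the shared initialization), with the per-language query losses combined via learned homoscedastic-uncertainty weights (exp(-s_i)·L_i + s_i) instead of hand-tuned fixed weights. This is a minimal toy sketch on scalar quadratic "tasks", not the paper's CNN-based ASR setup; the task definitions, learning rate, and helper names are illustrative assumptions.

```python
import numpy as np

def task_loss(w, a, b):
    # Toy per-language quadratic loss standing in for an ASR loss: (a*w - b)^2
    return (a * w - b) ** 2

def task_grad(w, a, b):
    # d/dw of (a*w - b)^2
    return 2.0 * a * (a * w - b)

def maml_meta_step(w, tasks, log_vars, inner_lr=0.1):
    """One MAML outer-loop evaluation on the shared initialization w.

    tasks    : list of (a_support, b_support, a_query, b_query) tuples,
               one per source language (assumed toy data).
    log_vars : learned s_i = log(sigma_i^2), one per language; the query
               losses are weighted exp(-s_i)*L_i + s_i (homoscedastic
               uncertainty) rather than manually chosen fixed weights.
    Returns the weighted meta-loss and its gradient w.r.t. w, where the
    gradient is taken *through* the inner adaptation step (bilevel).
    """
    meta_loss, meta_grad = 0.0, 0.0
    for (a_s, b_s, a_q, b_q), s in zip(tasks, log_vars):
        # Inner loop: one gradient step on the language's support data.
        w_adapted = w - inner_lr * task_grad(w, a_s, b_s)
        # For this quadratic, d(w_adapted)/dw = 1 - inner_lr * 2 * a_s^2,
        # so the outer gradient includes the second-order MAML term.
        dq_dw = task_grad(w_adapted, a_q, b_q) * (1.0 - inner_lr * 2.0 * a_s ** 2)
        weight = np.exp(-s)
        meta_loss += weight * task_loss(w_adapted, a_q, b_q) + s
        meta_grad += weight * dq_dw
    return meta_loss, meta_grad

# Single outer update on the initialization (outer learning rate assumed).
tasks = [(1.0, 0.0, 1.0, 0.0), (2.0, 1.0, 2.0, 1.0)]
loss, grad = maml_meta_step(1.0, tasks, log_vars=[0.0, 0.0])
w_new = 1.0 - 0.01 * grad
```

In a real implementation the `log_vars` would be trained jointly with the model (the exp(-s_i) form keeps the weights positive while the +s_i term penalizes trivially inflating the variances), replacing the costly manual weight search described in the abstract.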