Concepts
Meta-learning (computer science), Machine learning, Gradient descent, Mathematical optimization, Optimization problem, Convergence, Artificial intelligence, Artificial neural network, Algorithm, Mathematics
Authors
Feiyang Ye,Baijiong Lin,Zhixiong Yue,Yu Zhang,Ivor W. Tsang
Identifier
DOI:10.1016/j.artint.2024.104184
Abstract
Meta learning with multiple objectives can be formulated as a Multi-Objective Bi-Level optimization Problem (MOBLP), in which the upper-level subproblem optimizes several possibly conflicting objectives for the meta learner. However, existing studies either apply an inefficient evolutionary algorithm or linearly combine the multiple objectives into a single-objective problem whose combination weights must be tuned. In this paper, we propose a unified gradient-based Multi-Objective Meta Learning (MOML) framework and devise the first gradient-based optimization algorithm to solve the MOBLP, alternately solving the lower-level and upper-level subproblems via the gradient descent method and a gradient-based multi-objective optimization method, respectively. Theoretically, we prove convergence properties of the proposed gradient-based optimization algorithm. Empirically, we show the effectiveness of the proposed MOML framework on several meta learning problems, including few-shot learning, neural architecture search, domain adaptation, and multi-task learning.
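The alternating scheme the abstract describes can be illustrated with a minimal toy sketch: a few gradient-descent steps approximately solve the lower-level subproblem, then the upper-level meta-parameters move along a common-descent direction obtained by a min-norm (MGDA-style) combination of the objective gradients. Everything below is an illustrative assumption — the quadratic losses, targets `a`/`b`, step sizes, and the identity hypergradient shortcut are not from the paper.

```python
import numpy as np

# Toy sketch of alternating bi-level optimization with a min-norm
# multi-objective upper-level step. All losses and constants here are
# hypothetical, chosen so the lower-level solution is w*(theta) = theta
# (making dw*/dtheta the identity, so upper gradients w.r.t. w and
# theta coincide and no explicit hypergradient is needed).

def lower_step(w, theta, lr=0.1):
    # one GD step on the inner loss ||w - theta||^2 (theta held fixed)
    return w - lr * 2.0 * (w - theta)

def upper_grads(w):
    # two possibly conflicting upper-level objectives:
    # F1 = ||w - a||^2 and F2 = ||w - b||^2 with different targets
    a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    return 2.0 * (w - a), 2.0 * (w - b)

def mgda_weight(g1, g2):
    # closed-form min-norm weight for two gradients:
    # alpha* = clip(<g2 - g1, g2> / ||g1 - g2||^2, 0, 1)
    diff = g1 - g2
    denom = diff @ diff
    if denom < 1e-12:
        return 0.5
    return float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))

theta = np.zeros(2)
w = np.zeros(2)
for _ in range(200):
    # approximately solve the lower-level subproblem by a few GD steps
    for _ in range(5):
        w = lower_step(w, theta)
    # upper-level step along the min-norm common-descent direction
    g1, g2 = upper_grads(w)
    alpha = mgda_weight(g1, g2)
    theta -= 0.05 * (alpha * g1 + (1.0 - alpha) * g2)

# in this symmetric toy, theta drifts toward the Pareto point (0.5, 0.5)
print(np.round(theta, 2))
```

The min-norm weight makes the combined direction a descent direction for both objectives whenever one exists, which is why no hand-tuned combination weights are needed, in contrast to the linear-scalarization baseline the abstract criticizes.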