Keywords: Human multitasking; Convergence; Computer science; Task (project management); Gradient descent; Evolutionary algorithm; Mathematical optimization; Artificial intelligence; Mathematics; Psychology; Economic growth; Artificial neural network; Economics; Cognitive psychology; Management
Authors
Lu Bai, Wu Lin, Abhishek Gupta, Yew-Soon Ong
Source
Journal: IEEE Transactions on Cybernetics
[Institute of Electrical and Electronics Engineers]
Date: 2021-03-11
Volume/Issue: 52 (8): 8561-8573
Citations: 22
Identifier
DOI: 10.1109/tcyb.2021.3052509
Abstract
Evolutionary multitasking, which solves multiple optimization tasks simultaneously, has gained increasing research attention in recent years. By utilizing useful information from related tasks while solving them concurrently, such methods have shown improved performance on a variety of problems. Despite the success enjoyed by existing evolutionary multitasking algorithms, there is still a lack of theoretical studies guaranteeing faster convergence compared to the conventional single-task case. To analyze the effects of information transferred from related tasks, in this article, we first put forward a novel multitask gradient descent (MTGD) algorithm, which enhances the standard gradient descent update with a multitask interaction term. The convergence of the resulting MTGD is derived. Furthermore, we present the first proof of faster convergence of MTGD relative to its single-task counterpart. Utilizing MTGD, we formulate a gradient-free evolutionary multitasking algorithm called multitask evolution strategies (MTESs). Importantly, the single-task evolution strategies (ESs) we utilize are shown to asymptotically approximate gradient descent; hence, the faster convergence results derived for MTGD extend to the case of MTES as well. Numerical experiments comparing MTES with single-task ES on synthetic benchmarks and practical optimization examples serve to substantiate our theoretical claim.
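To make the idea of "gradient descent plus a multitask interaction term" concrete, here is a minimal sketch in Python. The consensus-style form of the interaction term (a pull toward the mean of all tasks' iterates) and the `lr`/`coupling` parameters are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def mtgd(gradients, x_init, lr=0.1, coupling=0.05, steps=500):
    """Sketch of a multitask gradient descent (MTGD) loop.

    gradients: list of K gradient functions, one per task, all sharing
               the same d-dimensional search space.
    x_init:    array-like of shape (K, d), one starting point per task.

    Each task takes a standard gradient step, augmented by an assumed
    interaction term that pulls its iterate toward the average of all
    tasks' iterates (the transfer mechanism is hypothetical here).
    """
    x = np.array(x_init, dtype=float)
    k = len(gradients)
    for _ in range(steps):
        x_mean = x.mean(axis=0)
        x_new = x.copy()
        for i in range(k):
            grad_step = lr * gradients[i](x[i])
            interaction = coupling * (x_mean - x[i])  # multitask pull
            x_new[i] = x[i] - grad_step + interaction
        x = x_new
    return x

# Two related 1-D quadratic tasks with nearby minima at 1.0 and 1.2.
grads = [lambda v: 2.0 * (v - 1.0),
         lambda v: 2.0 * (v - 1.2)]
final = mtgd(grads, x_init=[[5.0], [-4.0]])
# The coupling biases each task slightly toward the other's optimum:
# the iterates settle near 1.02 and 1.18 rather than exactly 1.0 and 1.2.
```

The example also shows the trade-off the interaction term introduces: when the tasks' optima are close, the pull toward the shared mean can speed up early progress, but it leaves a small bias at the fixed point unless the coupling is annealed.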