Authors
Zhaobo Liu, Li Guo, Haili Zhang, Zhengping Liang, Zexuan Zhu
Identifier
DOI: 10.1109/tcyb.2023.3270904
Abstract
The multifactorial evolutionary algorithm (MFEA) is one of the most widely used evolutionary multitasking (EMT) algorithms. The MFEA implements knowledge transfer among optimization tasks via crossover and mutation operators, obtaining high-quality solutions more efficiently than single-task evolutionary algorithms. Despite the effectiveness of the MFEA in solving difficult optimization problems, there is no proof of population convergence or theoretical explanation of how knowledge transfer improves algorithm performance. To fill this gap, we propose in this article a new MFEA based on diffusion gradient descent (DGD), namely MFEA-DGD. We prove the convergence of DGD for multiple similar tasks and demonstrate that the local convexity of some tasks can help other tasks escape local optima via knowledge transfer. Based on this theoretical foundation, we design complementary crossover and mutation operators for the proposed MFEA-DGD. As a result, the evolving population is endowed with a dynamic equation similar to that of DGD: convergence is guaranteed, and the benefit of knowledge transfer is explainable. In addition, a hyper-rectangular search strategy is introduced to allow MFEA-DGD to explore more underdeveloped areas in the unified express space of all tasks and the subspace of each task. The proposed MFEA-DGD is verified experimentally on various multitask optimization problems, and the results demonstrate that MFEA-DGD converges faster to competitive results than state-of-the-art EMT algorithms. We also show the possibility of interpreting the experimental results based on the convexity of the different tasks.
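The diffusion dynamic the abstract refers to can be sketched in a few lines: each task keeps its own iterate, takes a local gradient step (the "adapt" phase), and then mixes its iterate with those of similar tasks through a stochastic combination matrix (the "combine" phase, playing the role of knowledge transfer). The quadratic objectives, step size, and mixing weights below are illustrative assumptions for two similar tasks, not the paper's actual operators or benchmarks.

```python
import numpy as np

# Two similar single-variable tasks with nearby minima (assumed for illustration).
def grad_f1(x):
    return 2.0 * (x - 1.0)      # f1(x) = (x - 1.0)^2, minimum at 1.0

def grad_f2(x):
    return 2.0 * (x - 1.2)      # f2(x) = (x - 1.2)^2, minimum at 1.2

eta = 0.1                        # step size (assumed)
A = np.array([[0.8, 0.2],        # doubly stochastic mixing matrix: each task
              [0.2, 0.8]])       # keeps most of its own iterate, shares the rest
x = np.array([5.0, -5.0])        # one iterate per task, poor starting points

for _ in range(200):
    # Adapt: local gradient step on each task's own objective.
    psi = x - eta * np.array([grad_f1(x[0]), grad_f2(x[1])])
    # Combine: diffuse iterates between tasks (knowledge transfer).
    x = A @ psi

print(x)  # both iterates settle between the two minima, biased toward their own
```

Because the tasks' minima differ slightly, the fixed point of this adapt-then-combine recursion lies between them rather than at either one; the closer the tasks, the smaller that bias, which mirrors the abstract's premise that transfer helps most among similar tasks.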