Computer science
Artificial intelligence
Human multitasking
Multi-objective optimization
Optimization problem
Transfer of learning
Evolutionary computation
Machine learning
Backpropagation
Artificial neural network
Mathematical optimization
Algorithm
Mathematics
Psychology
Cognitive psychology
Authors
Songbai Liu, Qiuzhen Lin, Liang Feng, Ka‐Chun Wong, Kay Chen Tan
Identifier
DOI:10.1109/tevc.2022.3166482
Abstract
Evolutionary transfer optimization (ETO) has become a hot research topic in the field of evolutionary computation, based on the observation that learning and transferring knowledge across related optimization tasks can improve the efficiency of solving each of them. However, few studies employ ETO to solve large-scale multiobjective optimization problems (LMOPs). To fill this research gap, this article proposes a new multitasking ETO algorithm that uses a powerful transfer learning model to solve multiple LMOPs simultaneously. In particular, inspired by adversarial domain adaptation in transfer learning, a discriminative reconstruction network (DRN) model (containing an encoder, a decoder, and a classifier) is created for each LMOP. At each generation, the DRN is trained on the currently obtained nondominated solutions of all LMOPs via backpropagation with gradient descent. With this well-trained DRN model, the proposed algorithm can transfer the solutions of source LMOPs directly to the target LMOP to assist its optimization, can evaluate the correlation between the source and target LMOPs to control the transfer of solutions, and can learn a dimension-reduced Pareto-optimal subspace of the target LMOP to improve the efficiency of transfer optimization in the large-scale search space. Moreover, we propose a real-world multitasking LMOP suite that simulates the training of deep neural networks (DNNs) on multiple different classification tasks. Finally, the effectiveness of the proposed algorithm is validated on this real-world problem suite and two synthetic problem suites.
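The abstract describes a DRN with three components: an encoder that maps large-scale decision vectors into a low-dimensional latent subspace, a decoder that reconstructs them, and a classifier that discriminates which task a solution came from. The sketch below illustrates only that three-part forward structure with plain numpy; the layer sizes, activations, and class name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class DRNSketch:
    """Hypothetical sketch of a discriminative reconstruction network:
    encoder -> low-dimensional latent subspace, decoder -> reconstruction,
    classifier -> task-membership probabilities. Weights are random here;
    the paper trains them by backpropagation with gradient descent."""

    def __init__(self, dim, latent, n_tasks):
        self.We = rng.standard_normal((dim, latent)) * 0.1   # encoder weights
        self.Wd = rng.standard_normal((latent, dim)) * 0.1   # decoder weights
        self.Wc = rng.standard_normal((latent, n_tasks)) * 0.1  # classifier weights

    def encode(self, X):
        # Project large-scale decision vectors into the latent subspace
        return relu(X @ self.We)

    def decode(self, Z):
        # Reconstruct decision vectors from latent codes
        return Z @ self.Wd

    def classify(self, Z):
        # Softmax over task labels: which LMOP did this solution come from?
        logits = Z @ self.Wc
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

# Five nondominated solutions of a 1000-variable LMOP, three tasks in the suite
drn = DRNSketch(dim=1000, latent=8, n_tasks=3)
X = rng.standard_normal((5, 1000))
Z = drn.encode(X)          # shape (5, 8): dimension-reduced subspace
X_rec = drn.decode(Z)      # shape (5, 1000): reconstructed solutions
probs = drn.classify(Z)    # shape (5, 3): task-membership probabilities
```

In this view, solution transfer amounts to decoding latent codes into the target task's decision space, and the classifier's confusion between tasks gives a rough correlation signal for controlling how much is transferred.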