Evolutionary algorithm
Computer science
Curse of dimensionality
Gradient descent
Multi-objective optimization
Mathematical optimization
Classification
Artificial neural network
Evolutionary computation
Artificial intelligence
Convergence (economics)
Backpropagation
Perceptron
Population
Dimension (graph theory)
Machine learning
Mathematics
Algorithm
Demography
Sociology
Economic growth
Economy
Pure mathematics
Authors
Songbai Liu, Jun Li, Qiuzhen Lin, Ye Tian, Kay Chen Tan
Identifier
DOI: 10.1109/tevc.2022.3155593
Abstract
Most existing evolutionary search strategies are not efficient when directly handling the decision space of large-scale multiobjective optimization problems (LMOPs). To improve efficiency in tackling LMOPs, this article proposes an accelerated evolutionary search (AES) strategy. Its main idea is to learn a gradient-descent-like direction vector (GDV) for each solution via a specially trained feedforward neural network; this learned direction approximates the fastest convergent direction and is used to reproduce new solutions efficiently. Specifically, a multilayer perceptron (MLP) with a single hidden layer is constructed, in which the number of neurons in the input and output layers equals the dimension of the decision space. To obtain suitable training data for the model, the current population is divided into two subsets based on nondominated sorting: each poor solution (with worse convergence) in one subset is paired with the elitist solution in the other subset that forms the minimum angle with it, since that elitist solution is considered most likely to guide it toward rapid convergence. The MLP is then updated via backpropagation with gradient descent on this carefully prepared dataset. Finally, an accelerated large-scale multiobjective evolutionary algorithm (ALMOEA) is designed by using AES as a reproduction operator. Experimental studies validate the effectiveness of the proposed AES on LMOPs with decision-space dimensionality ranging from 1000 to 10000. Compared with six state-of-the-art evolutionary algorithms, the experimental results also show the better efficiency and performance of the proposed optimizer in solving various LMOPs.
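The pipeline described in the abstract — pair poor solutions with minimum-angle elitists, train a one-hidden-layer MLP on those pairs via backpropagation, then use the MLP's output to form a direction vector for reproduction — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the pairing angle, hidden-layer size, loss function (MSE from poor to elite), and the step-size sampling in `aes_reproduce` are all assumptions made for the sketch.

```python
import numpy as np

def pair_by_min_angle(poor, elite):
    # For each poor solution, pick the elitist solution whose vector
    # forms the minimum angle with it (i.e., maximum cosine similarity).
    # Assumption: the angle is measured between raw decision vectors.
    targets = []
    for p in poor:
        cos = elite @ p / (np.linalg.norm(elite, axis=1) * np.linalg.norm(p) + 1e-12)
        targets.append(elite[np.argmax(cos)])
    return np.array(targets)

class TinyMLP:
    """One hidden layer; input and output width both equal the
    decision-space dimension d, as stated in the abstract."""
    def __init__(self, d, hidden=16, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (d, hidden)); self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, d)); self.b2 = np.zeros(d)
        self.lr = lr

    def forward(self, X):
        self.H = np.tanh(X @ self.W1 + self.b1)   # hidden activations
        return self.H @ self.W2 + self.b2

    def train_step(self, X, Y):
        # One backpropagation step with gradient descent on the
        # mean-squared error between MLP(poor) and the paired elitist.
        out = self.forward(X)
        err = out - Y
        n = len(X)
        gW2 = self.H.T @ err / n; gb2 = err.mean(0)
        dH = (err @ self.W2.T) * (1.0 - self.H ** 2)  # tanh' = 1 - tanh^2
        gW1 = X.T @ dH / n; gb1 = dH.mean(0)
        self.W2 -= self.lr * gW2; self.b2 -= self.lr * gb2
        self.W1 -= self.lr * gW1; self.b1 -= self.lr * gb1
        return float((err ** 2).mean())

def aes_reproduce(pop, mlp, seed=1):
    # GDV = predicted target minus current solution; offspring move a
    # random fraction of the way along it (step-size rule is assumed).
    rng = np.random.default_rng(seed)
    gdv = mlp.forward(pop) - pop
    return pop + rng.random((len(pop), 1)) * gdv
```

In a full ALMOEA loop, nondominated sorting would split the population into the elitist and poor subsets each generation before retraining the MLP; here the subsets are taken as given.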