Differential evolution
Artificial neural network
Population
Reinforcement learning
Computer science
Mutation
Range (aeronautics)
Jumping
Artificial intelligence
Evolutionary algorithm
Mathematical optimization
Algorithm
Machine learning
Mathematics
Engineering
Sociology
Aerospace engineering
Demography
Physics
Chemistry
Gene
Quantum mechanics
Biochemistry
Author
Fuqing Zhao, Hao Zhou, Tianpeng Xu, Jonrinaldi
Identifier
DOI:10.1016/j.eswa.2023.122674
摘要
The differential evolution (DE) algorithm is widely regarded as one of the most influential evolutionary algorithms for addressing complex optimization problems. However, a fixed mutation strategy limits the adaptive ability of DE, and the lack of use of historical information limits its optimization ability. In this paper, an indicator-based self-learning differential evolution algorithm (ISDE) is proposed. A jump-out mechanism based on deep reinforcement learning is adopted to control the mutation intensity of the population. The neural network in the jump-out mechanism acts as a decision maker: it controls the mutation intensity of the population and is trained with a double deep Q-network algorithm on the data generated continuously during the evolution process. A population range indicator (PRI) is used to describe individual differences within the population, and a diversity maintenance mechanism preserves those differences according to the value of the PRI. The experimental results reveal that the comprehensive performance of ISDE is superior to that of the comparison algorithms on the CEC 2017 real-parameter numerical optimization benchmark.
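For readers unfamiliar with the baseline the abstract refers to, the sketch below shows the classic DE/rand/1/bin loop with a fixed mutation strategy, which is the scheme ISDE is designed to improve on. This is a minimal illustration only: the jump-out mechanism, the double deep Q-network controller, and the PRI-based diversity maintenance described in the paper are not reproduced here, and all function names, parameters, and the toy objective are assumptions chosen for the example.

```python
import numpy as np

def de_rand_1_bin(objective, bounds, pop_size=20, F=0.5, CR=0.9,
                  generations=200, seed=0):
    """Classic DE/rand/1/bin with a fixed mutation strategy (illustrative baseline)."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lower = np.array([b[0] for b in bounds], dtype=float)
    upper = np.array([b[1] for b in bounds], dtype=float)

    # Initialize the population uniformly at random inside the search bounds.
    pop = lower + rng.random((pop_size, dim)) * (upper - lower)
    fitness = np.array([objective(x) for x in pop])

    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct and != i.
            candidates = [j for j in range(pop_size) if j != i]
            r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lower, upper)

            # Binomial crossover: each gene comes from the mutant with probability CR,
            # and at least one gene is always taken from the mutant.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])

            # Greedy selection: the trial vector replaces the parent only if it is no worse.
            f_trial = objective(trial)
            if f_trial <= fitness[i]:
                pop[i], fitness[i] = trial, f_trial

    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

if __name__ == "__main__":
    # Toy usage: minimize the sphere function on [-5, 5]^10.
    sphere = lambda x: float(np.sum(x ** 2))
    best_x, best_f = de_rand_1_bin(sphere, [(-5.0, 5.0)] * 10)
    print("best fitness:", best_f)
```

In this fixed scheme the scale factor F and crossover rate CR never change, which is exactly the rigidity the paper's reinforcement-learning-based jump-out mechanism is meant to address.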