Authors
Yun Liu, Ali Asghar Heidari, Zhennao Cai, Guoxi Liang, Huiling Chen, Zhifang Pan, Abdulmajeed Alsufyani, Sami Bourouis
Abstract
The shuffled frog leaping algorithm (SFLA) is an optimization algorithm proposed to solve combinatorial optimization problems; it effectively combines the memetic algorithm, based on memetic evolution, with the particle swarm optimization algorithm, based on population behavior. The algorithm is widely used because it is easy to implement and requires few parameters to be tuned. However, the method still has shortcomings, as it easily falls into local optima and can suffer from poor search ability. To alleviate these limitations, a new improved version of SFLA is proposed in this paper, which incorporates a dynamic step size adjustment strategy based on historical information, a specular reflection learning mechanism, and a simulated annealing mechanism based on chaotic mapping and Lévy flight. First, the dynamic step size adjustment strategy based on historical information helps to balance global exploration and local exploitation and alleviates the problem of falling into local optima. Second, the specular reflection learning mechanism increases the probability of finding valid solutions in the feasible domain and enhances the search ability of individuals in the population. Finally, an improved simulated annealing strategy is executed within each memeplex, which improves the efficiency of local exploitation. To test the performance of the proposed algorithm, 31 test functions were selected from IEEE CEC2014 and 23 essential benchmark functions, and comparative experiments were carried out in both 30 and 100 dimensions. A series of competing algorithms were selected, comprising nine classical algorithms (PSO, BA, SSA, FA, SCA, WOA, GWO, MFO, and SFLA) and six well-known improved algorithms (LSFLA, DDSFLA, GOTLBO, ALCPSO, BLPSO, and CLPSO). Furthermore, the Wilcoxon signed-rank test and the Friedman test are used to assess the statistical significance of the results. The analysis shows that the proposed method improves the stability and the quality of the optimal solutions found, in both low and high dimensions, and strengthens the ability to jump out of local optima. In addition, to demonstrate that the method performs reliably on both discrete and continuous problems, DSSRLFLA is mapped into a discrete space and evaluated as a feature selection method on 24 UCI data sets. The experimental results illustrate that this improved method obtains fewer features and higher classification accuracy than several well-known feature selection methods.
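The abstract does not give the exact specular reflection learning rule. As a purely illustrative stand-in, the sketch below uses the simplest opposition-style reflection, x' = lb + ub - x, kept only when it improves fitness; the function name specular_reflection_candidate and the greedy acceptance are assumptions, not the paper's formulation.

```python
import numpy as np

def specular_reflection_candidate(x, fitness, lb, ub):
    """Opposition-style reflected candidate (illustrative stand-in for
    the paper's specular reflection learning rule, which the abstract
    does not specify); kept only if it improves the fitness value."""
    reflected = lb + ub - x                      # mirror the solution about the box centre
    return reflected if fitness(reflected) < fitness(x) else x
```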
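The improved simulated annealing step is described only at a high level (chaotic mapping plus Lévy flight). The following Python sketch shows the general ingredients under stated assumptions: the logistic map as the chaotic map, Mantegna's algorithm for the Lévy step, a fixed 0.01 scaling of the search range, and standard Metropolis acceptance. Function names such as sa_move and levy_step are hypothetical and not taken from the paper.

```python
import numpy as np
from math import gamma, sin, pi

def logistic_map(z, r=4.0):
    """One iteration of the logistic chaotic map on (0, 1)."""
    return r * z * (1.0 - z)

def levy_step(dim, beta=1.5, rng=None):
    """Levy-distributed step vector via Mantegna's algorithm."""
    rng = rng or np.random.default_rng()
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def sa_move(x, f, temperature, chaos, lb, ub, rng=None):
    """Propose a chaotic Levy-flight perturbation and accept it with the Metropolis rule."""
    rng = rng or np.random.default_rng()
    chaos = logistic_map(chaos)                   # update the chaotic control variable
    step = 0.01 * chaos * levy_step(x.size, rng=rng) * (ub - lb)
    candidate = np.clip(x + step, lb, ub)         # stay inside the feasible box
    delta = f(candidate) - f(x)
    if delta < 0 or rng.random() < np.exp(-delta / max(temperature, 1e-12)):
        return candidate, chaos                   # accept an improving or probabilistic move
    return x, chaos                               # otherwise keep the current solution
```

In this sketch the chaotic variable replaces a uniform random scaling of the Lévy step, and the temperature controls how often worsening moves are accepted; the paper's actual cooling schedule and step scaling may differ.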