Enhancing the competitive swarm optimizer with covariance matrix adaptation for large scale optimization

Author(s): Wei Li, Zhou Lei, Junqing Yuan, Haonan Luo, Qingzheng Xu
Complexity, 2021, Vol. 2021, pp. 1-12

Author(s): Jin Jin

For large-scale optimization, CMA-ES suffers from high computational complexity and premature stagnation. This paper proposes an improved CMA-ES algorithm called GI-ES. To address the high complexity, the explicit computation of the covariance matrix is replaced by modeling the expected fitness under a given covariance matrix. To address premature stagnation, the historical information of elite individuals is replaced by the historical information of all individuals; this information can be viewed as an approximate gradient, from which the parameters of the next generation are generated. The algorithm was evaluated on the CEC 2010 and CEC 2013 LSGO benchmark suites, and the results verify its effectiveness on a range of tasks.
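The abstract describes driving the next generation with an approximate gradient built from the history of all individuals rather than only elites. Below is a minimal sketch of that idea, assuming an ES-style search-gradient estimate over the full population; the function approximate_gradient_step, its parameters, and the rank-based weights are illustrative assumptions, not the exact GI-ES update.

```python
import numpy as np

def approximate_gradient_step(f, mean, sigma, pop_size, lr, rng):
    """One update of the distribution mean using an approximate gradient
    built from the fitness of ALL sampled individuals (not only elites).
    Illustrative ES-style estimator, not the exact GI-ES formulas."""
    dim = mean.size
    z = rng.standard_normal((pop_size, dim))   # perturbations for the whole population
    candidates = mean + sigma * z
    fitness = np.array([f(x) for x in candidates])
    # Rank-based weights: every individual contributes; for minimization the best
    # get negative weight and the worst positive, so the estimate is scale-invariant.
    ranks = fitness.argsort().argsort()
    weights = ranks / (pop_size - 1) - 0.5
    # Approximate gradient of the expected (rank-transformed) fitness w.r.t. the mean.
    grad = (weights[:, None] * z).sum(axis=0) / (pop_size * sigma)
    return mean - lr * grad                    # step against the approximate gradient

# Illustrative usage on a 1000-dimensional sphere function.
rng = np.random.default_rng(0)
m = rng.standard_normal(1000)
for _ in range(200):
    m = approximate_gradient_step(lambda x: float(np.sum(x * x)), m, 0.3, 40, 0.5, rng)
```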


2020, Vol. 53 (2), pp. 12572-12577
Author(s): Fernando Lezama, Ricardo Faia, Omid Abrishambaf, Pedro Faria, Zita Vale

Author(s): Jie Guo, Zhong Wan

A new spectral three-term conjugate gradient algorithm based on the quasi-Newton equation is developed for solving large-scale unconstrained optimization problems. It is proved that the search directions generated by the algorithm always satisfy a sufficient descent condition, independent of any line search. Global convergence is established for general objective functions when the strong Wolfe line search is used. Numerical experiments demonstrate its high performance on large-scale optimization problems. In particular, the algorithm is applied to 100 benchmark test problems from CUTE with dimensions ranging from 1,000 to 10,000, in comparison with similar methods from the literature. The numerical results show that the algorithm outperforms state-of-the-art methods in terms of CPU time, number of iterations, and number of function evaluations.
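The abstract combines a spectral scaling, a three-term search direction, a sufficient descent safeguard, and a strong Wolfe line search. The sketch below illustrates that general scheme with standard placeholder choices (Barzilai-Borwein spectral scaling and HS-type coefficients) and SciPy's strong Wolfe line search; the formulas and the name spectral_three_term_cg are assumptions for illustration, not the paper's exact update.

```python
import numpy as np
from scipy.optimize import line_search

def spectral_three_term_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Generic spectral three-term conjugate gradient method with a strong
    Wolfe line search. Placeholder theta/beta/gamma formulas, not the paper's."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g, np.inf) <= tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0]
        if alpha is None:                 # line search failed: restart with a small steepest-descent step
            d, alpha = -g, 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        theta = s @ s / max(s @ y, 1e-12)        # spectral (Barzilai-Borwein) scaling
        beta = g_new @ y / max(d @ y, 1e-12)     # HS-type conjugacy coefficient
        gamma = g_new @ d / max(d @ y, 1e-12)
        # Three-term direction: spectral gradient + CG term + correction term.
        d = -theta * g_new + beta * d - gamma * y
        if g_new @ d > -1e-4 * (g_new @ g_new):  # safeguard: enforce sufficient descent
            d = -g_new
        x, g = x_new, g_new
    return x

# Illustrative usage on an extended Rosenbrock function of dimension 1000.
n = 1000
f = lambda x: np.sum(100 * (x[1::2] - x[::2]**2)**2 + (1 - x[::2])**2)
def grad(x):
    g = np.zeros_like(x)
    g[::2] = -400 * x[::2] * (x[1::2] - x[::2]**2) - 2 * (1 - x[::2])
    g[1::2] = 200 * (x[1::2] - x[::2]**2)
    return g
x_star = spectral_three_term_cg(f, grad, np.full(n, -1.2))
```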

