Memory-based approaches for eliminating premature convergence in particle swarm optimization
Authors: K. Chaitanya, D. V. L. N. Somayajulu, P. Radha Krishna
Abstract
Particle Swarm Optimization (PSO) is a computational method in which a group of particles moves through a search space in search of an optimal solution. During this movement, each particle updates its position and velocity using its own best previous position and the best position found by the swarm. Although PSO is considered a promising approach and has been applied in many areas, it suffers from premature convergence, in which all particles converge too early and yield sub-optimal results. While several techniques address premature convergence, achieving a high convergence rate while still avoiding premature convergence remains challenging. In this paper, we present two new memory-based variants of PSO for preventing premature convergence. The first technique (PSOMR) augments PSO with memory by leveraging concepts from the Ebbinghaus forgetting curve. The second technique (MS-PSOMR) divides the swarm into multiple sub-swarms. Both techniques use memory to store promising historical values and use them later to avoid premature convergence. The proposed approaches are compared with existing algorithms of a similar category and evaluated on the CEC 2010 and CEC 2017 benchmark functions. The results show that both approaches perform significantly better on the measured metrics and discourage premature convergence.
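For readers unfamiliar with the baseline algorithm, the sketch below illustrates the canonical global-best PSO update described in the abstract, where each particle's velocity and position are updated from its own best previous position and the swarm's best position. This is a minimal sketch of standard PSO only; the function name, parameter values (w, c1, c2), and bounds are illustrative assumptions and do not represent the authors' PSOMR or MS-PSOMR implementations.

```python
# Minimal sketch of canonical global-best PSO (baseline the paper's variants build on).
# Parameter values below are common illustrative defaults, not taken from the paper.
import numpy as np

def pso(objective, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))        # particle positions
    v = np.zeros((n_particles, dim))                   # particle velocities
    pbest = x.copy()                                   # each particle's best previous position
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()           # best position found by the swarm

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + cognitive (pbest) + social (gbest) terms.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example: minimize the sphere function in 10 dimensions.
best_x, best_f = pso(lambda z: float(np.sum(z**2)), dim=10)
```

Premature convergence arises in this baseline when all particles cluster around gbest and the velocity terms shrink toward zero; the paper's memory-based variants counter this by reintroducing promising historical positions from memory.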
Keywords: Particle swarm optimization, Memory curve, Premature convergence, Sub swarms
Paper link: https://doi.org/10.1007/s10489-020-02045-z