A new approach for function approximation incorporating adaptive particle swarm optimization and a priori information

Authors:

Highlights:

Abstract

In this paper, a new approach coupling adaptive particle swarm optimization (APSO) with a priori information is proposed for the function approximation problem, with the aim of achieving better generalization performance and a faster convergence rate. It is well known that gradient-based learning algorithms such as the backpropagation (BP) algorithm have good local search ability, whereas PSO has good global search ability. Therefore, in the new approach, APSO, which encodes the first-order derivative information of the approximated function, is first applied to train the network toward a near-global minimum. Second, starting from the connection weights produced by APSO, the network is further trained with a gradient-based algorithm. By combining APSO with a local search algorithm and incorporating a priori information, the new approach achieves better generalization performance and a faster convergence rate than traditional learning algorithms. Finally, simulation results are given to verify the efficiency and effectiveness of the proposed approach.
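As a rough illustration of the two-stage scheme described in the abstract, the sketch below trains a small feedforward network with a PSO stage whose fitness combines output error and first-order derivative error (the a priori information), followed by a gradient-based refinement started from the PSO solution. The toy target f(x) = sin(x), the network size, the linearly decreasing inertia weight standing in for "adaptive" PSO, the finite-difference gradient step standing in for the BP-type algorithm, and all hyperparameters are illustrative assumptions, not the paper's actual settings.

```python
# Sketch: PSO stage with derivative-informed fitness, then gradient refinement.
# All settings below are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(0)
H = 8                                    # hidden units (assumed)
x = np.linspace(-np.pi, np.pi, 60)       # training inputs (assumed)
f, df = np.sin(x), np.cos(x)             # target values and a priori derivatives

def unpack(p):
    w1, b1, w2, b2 = p[:H], p[H:2*H], p[2*H:3*H], p[3*H]
    return w1, b1, w2, b2

def forward(p, x):
    w1, b1, w2, b2 = unpack(p)
    h = np.tanh(np.outer(x, w1) + b1)            # hidden activations
    y = h @ w2 + b2                              # network output
    dy = ((1.0 - h**2) * w1) @ w2                # dy/dx of the network
    return y, dy

def fitness(p, lam=0.5):
    # Combined error: output mismatch plus derivative mismatch (a priori info).
    y, dy = forward(p, x)
    return np.mean((y - f)**2) + lam * np.mean((dy - df)**2)

# ---- Stage 1: PSO with a linearly decreasing (adaptive) inertia weight ----
dim, n_part, iters = 3*H + 1, 30, 300
pos = rng.uniform(-1, 1, (n_part, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
for t in range(iters):
    w = 0.9 - 0.5 * t / iters                    # inertia decreases over time
    r1, r2 = rng.random((2, n_part, dim))
    vel = w*vel + 1.5*r1*(pbest - pos) + 1.5*r2*(gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

# ---- Stage 2: gradient-based refinement started from the PSO solution ----
p, lr, eps = gbest.copy(), 0.05, 1e-5
for _ in range(500):
    grad = np.zeros(dim)                         # finite-difference gradient
    for i in range(dim):
        e = np.zeros(dim); e[i] = eps
        grad[i] = (fitness(p + e) - fitness(p - e)) / (2*eps)
    p -= lr * grad

print("fitness after PSO:", fitness(gbest))
print("fitness after refinement:", fitness(p))
```

The derivative-error term in the fitness is what injects the a priori knowledge into the swarm stage; dropping it (lam = 0) reduces the sketch to an ordinary hybrid PSO-plus-gradient trainer.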

Keywords: Feedforward neural networks, Function approximation, Particle swarm optimization, A priori information, Gradient-based learning algorithms

Article history: Available online 15 May 2008.

DOI: https://doi.org/10.1016/j.amc.2008.05.025