A structure optimization framework for feed-forward neural networks using sparse representation
Authors:
Abstract
Traditionally, optimizing the structure of a feed-forward neural network is time-consuming and requires balancing the trade-off between network size and network performance. In this paper, a sparse-representation based framework, termed SRS, is introduced to generate a small-sized network structure without compromising network performance. Based on a forward selection strategy, the SRS framework selects significant elements (weights or hidden neurons) from the initial network that minimize the residual output error. The main advantage of the SRS framework is that it optimizes the network structure and the training performance simultaneously. As a result, the training error decreases as the number of selected elements increases. The efficiency and robustness of the SRS framework are evaluated on several benchmark datasets. Experimental results indicate that the SRS framework performs favourably compared to alternative structure optimization algorithms.
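The abstract describes a greedy, forward-selection procedure that adds the hidden neurons (or weights) that best reduce the residual output error. The snippet below is a minimal sketch of that general idea, not the authors' exact SRS algorithm: it assumes a single hidden layer, treats the hidden-activation matrix `H` as a dictionary, and uses an orthogonal-matching-pursuit-style loop; the function name and toy data are hypothetical.

```python
# Hypothetical sketch of forward selection of hidden neurons via sparse
# representation, in the spirit of the abstract (not the paper's exact method).
import numpy as np

def select_hidden_neurons(H, T, k):
    """Greedily pick up to k hidden-neuron columns of H that best explain T.

    H : (n_samples, n_hidden) hidden-layer activation matrix
    T : (n_samples, n_outputs) target outputs
    k : maximum number of neurons to keep
    Returns the selected column indices and the refitted output weights.
    """
    residual = T.copy()
    selected = []
    for _ in range(k):
        # Correlation of each neuron's activation with the current residual.
        scores = np.linalg.norm(H.T @ residual, axis=1)
        scores[selected] = -np.inf          # do not re-pick selected neurons
        selected.append(int(np.argmax(scores)))
        # Refit output weights on the selected subset (least squares),
        # then update the residual output error.
        W, *_ = np.linalg.lstsq(H[:, selected], T, rcond=None)
        residual = T - H[:, selected] @ W
        if np.linalg.norm(residual) < 1e-6:  # early stop when error is tiny
            break
    return selected, W

# Toy usage with random data standing in for a trained network's activations.
rng = np.random.default_rng(0)
H = rng.standard_normal((200, 50))                    # 50 hidden neurons
T = H[:, [3, 17, 42]] @ rng.standard_normal((3, 1))   # target uses 3 of them
idx, W = select_hidden_neurons(H, T, k=5)
print(sorted(idx[:3]))   # likely recovers neurons 3, 17, 42
```

As in the abstract, each added neuron is the one most correlated with the remaining residual, so the training error is non-increasing as the selected set grows; the actual SRS framework extends this idea to single- and multiple-measurement-vector formulations over weights or hidden neurons.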
Keywords: Neural networks, Structure optimization, Sparse representation, Network pruning, Network construction, Single measurement vector, Multiple measurement vector
Article history: Received 24 November 2015, Revised 15 June 2016, Accepted 19 June 2016, Available online 21 June 2016, Version of Record 3 September 2016.
Paper link: https://doi.org/10.1016/j.knosys.2016.06.026