Hardware implementation methods in Random Vector Functional-Link Networks
Authors: José M. Martínez-Villena, Alfredo Rosado-Muñoz, Emilio Soria-Olivas
Abstract
Recently, there has been renewed interest in Single Layer Feedforward Neural Network (SLF-NN) models in which the hidden layer coefficients are randomly assigned and the output coefficients are computed by a least squares algorithm. Besides the random coefficient initialization, the main advantages of these learning models are the speed of training (no multiple iterations required) and the absence of user-defined initial parameters (e.g. no adaptation constant as in the multilayer perceptron). These features make them suitable for real-time operation, since fast online training can be achieved, benefiting applications (industrial, automotive, portable systems) where other neural network learning approaches cannot be used due to large resource usage, low speed, or lack of flexibility. Thus, targeting a hardware implementation enables their use in embedded systems, expanding their application areas to real-time systems and, in general, to applications where the use of desktop computers is not possible. Typically, the Random Vector Functional-Link Network (RVFLN) demands a large amount of resources and a high computational burden: high-dimension matrices are involved, and computation-intensive algorithms, especially matrix inversion, are required to obtain the output layer coefficients of the neural network. This work describes the algorithm implementation and optimization of these models to fit embedded hardware requirements, together with a parameterizable model that allows different applications to benefit from it. The proposal includes the use of fuzzy activation functions in the neurons to reduce computation. An exhaustive analysis of three proposed computation architectures for the learning algorithm is carried out. Classification results for three standard datasets using fixed-point arithmetic are compared to Matlab floating-point results, together with hardware-related analysis such as speed of operation, bit-length accuracy in fixed-point arithmetic, and logic resource occupation.
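To illustrate the training scheme the abstract describes (random hidden-layer coefficients plus a least-squares solution for the output layer), the following is a minimal NumPy sketch. It is not the authors' fixed-point VHDL implementation; the sigmoid activation, the regularization term `lam`, and the function names are assumptions made for illustration only.

```python
import numpy as np

def train_rvfl(X, T, n_hidden=50, lam=1e-6, seed=0):
    """X: (n_samples, n_inputs) data, T: (n_samples, n_outputs) targets."""
    rng = np.random.default_rng(seed)
    n_inputs = X.shape[1]
    # Hidden-layer weights and biases are drawn at random and never trained.
    W = rng.uniform(-1.0, 1.0, size=(n_inputs, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))      # hidden (enhancement) activations
    # Functional-link structure: the output layer sees inputs and hidden nodes.
    D = np.hstack([X, H])
    # Output coefficients via (regularized) least squares -- the only training step,
    # which in hardware is the matrix-inversion bottleneck discussed in the paper.
    beta = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ T)
    return W, b, beta

def predict_rvfl(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.hstack([X, H]) @ beta
```

Because no iterative weight updates are involved, the whole training cost reduces to building the matrix D and solving one linear system, which is why the paper focuses on hardware-friendly schemes for that matrix computation.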
Keywords: Random Vector Functional-Link Networks, Fast learning, Matrix inversion, Neural network training, VHDL, Embedded and real-time systems
DOI: https://doi.org/10.1007/s10489-013-0501-1