Fast adaptive algorithms and networks for class-separability features

Authors:


Abstract

In this article, we introduce accelerated algorithms for linear discriminant analysis (LDA) and feature extraction from unimodal multiclass Gaussian data. Current adaptive methods based on gradient descent optimization use a fixed or monotonically decreasing step size at each iteration, which results in a slow convergence rate. Here, we use a variable step size, optimally computed at each iteration using the steepest descent method, to accelerate the convergence of the algorithm. Based on the new adaptive algorithm, we present a self-organizing neural network for adaptive computation of the square root of the inverse covariance matrix (Σ^(−1/2)) and use it (i) in a network for optimal feature extraction from Gaussian data and (ii) in cascaded form with a principal component analysis network for LDA. Experimental results demonstrate the fast convergence and high stability of the algorithm and justify its advantages for on-line pattern recognition applications with stationary and non-stationary input data.
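To make the adaptive computation concrete, the sketch below implements the classical fixed-step stochastic iteration W ← W + η(I − W x xᵀ W), whose fixed point satisfies W Σ W = I, i.e., W = Σ^(−1/2). This is a minimal illustration, not the paper's algorithm: the paper's contribution is replacing the fixed step size η with one computed optimally at each iteration by steepest descent, which is not reproduced here. The function and parameter names are illustrative.

```python
import numpy as np

def adaptive_inverse_sqrt_cov(samples, eta=0.005):
    """Estimate W ≈ Σ^(-1/2) from a stream of zero-mean samples.

    Fixed-step sketch of the adaptive iteration
        W_{k+1} = W_k + eta * (I - W_k x_k x_k^T W_k).
    The paper instead computes an optimal step size at every
    iteration via steepest descent (not shown in this sketch).
    """
    n = samples.shape[1]
    W = np.eye(n)
    I = np.eye(n)
    for x in samples:
        Wx = (W @ x).reshape(-1, 1)    # column vector W x_k
        W = W + eta * (I - Wx @ Wx.T)  # stochastic step toward Σ^(-1/2)
    return W

# Usage: whiten synthetic zero-mean Gaussian data so that W Σ W ≈ I.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
X = rng.standard_normal((20000, 3)) @ A.T  # true covariance is A A^T
W = adaptive_inverse_sqrt_cov(X)
print(np.round(W @ (A @ A.T) @ W, 2))      # close to identity once converged
```

With a fixed η, convergence of this iteration is slow and sensitive to the choice of step size, which is precisely the limitation the paper's optimally computed variable step size is designed to remove.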

Keywords: Linear discriminant analysis, Principal component analysis, Feature extraction, Gradient descent optimization, Steepest descent optimization, Self-organizing neural network, Adaptive algorithms, Convergence analysis

Article history: Received 25 April 2002; Revised 12 November 2002; Accepted 12 November 2002; Available online 5 April 2003.

DOI: https://doi.org/10.1016/S0031-3203(03)00006-2