Unified formulation of linear discriminant analysis methods and optimal parameter selection

Abstract:

In the last decade, many variants of classical linear discriminant analysis (LDA) have been developed to tackle the under-sampled problem in face recognition. However, choosing among these variants is not easy, since they involve eigenvalue decompositions that make cross-validation computationally expensive. In this paper, we propose to solve this problem by unifying these LDA variants in one framework: principal component analysis (PCA) plus constrained ridge regression (CRR). In CRR, one selects a target (also called a class indicator) for each class and finds a projection that maps each class center to its target while minimizing the within-class distances, with a penalty on the norm of the transform as in ridge regression. Under this framework, many existing LDA methods can be viewed as PCA+CRR with particular regularization parameters and class indicators, so choosing the best LDA method reduces to choosing the best member of the CRR family. The latter can be done by comparing leave-one-out (LOO) errors, and we present an efficient algorithm, requiring computations comparable to the training of CRR, to evaluate these errors. Experiments on the Yale Face B, Extended Yale B, and CMU-PIE databases demonstrate the effectiveness of the proposed methods.

Keywords: Linear discriminant analysis, Model selection, Under-sampled problem, Face recognition, Principal component analysis, Constrained ridge regression

Article history: Received 24 November 2009; Revised 25 July 2010; Accepted 23 August 2010; Available online 30 August 2010.

DOI: https://doi.org/10.1016/j.patcog.2010.08.026