On nonlinear dimensionality reduction for face recognition

Abstract:

The curse of dimensionality has prompted intensive research into effective methods of mapping high-dimensional data. Dimensionality reduction and subspace learning have been studied extensively and widely applied to feature extraction and pattern representation in image and vision applications. Although principal component analysis (PCA) has long been regarded as a simple, efficient linear subspace technique, many nonlinear methods, such as kernel PCA, locally linear embedding, and self-organizing networks, have recently been proposed to deal with increasingly complex nonlinear data. The intensive research in nonlinear methods often creates the impression that they are highly superior and to be preferred, yet the supporting experiments are often limited and the results are rarely tested for statistical significance. In this paper, we systematically investigate and compare the capabilities of various linear and nonlinear subspace methods for face representation and recognition. The performance of these methods is analyzed and discussed, together with statistical significance tests on the results obtained. Experiments on a range of data sets show that nonlinear methods do not always outperform linear ones, especially on data sets containing noise and outliers or exhibiting discontinuous or multiple submanifolds. Certain nonlinear methods combined with certain classifiers do consistently yield better performance than others; however, the differences among them are small and in most cases not statistically significant. A measure is introduced to quantify the nonlinearity of a data set in a subspace; it explains why good performance is achievable in reduced dimensions with a low degree of nonlinearity.
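A minimal sketch of the kind of comparison the abstract describes, not the authors' exact protocol: linear PCA versus kernel PCA as subspaces for face recognition with a 1-NN classifier, followed by an exact McNemar test on the paired test predictions to check whether the accuracy gap is statistically significant. The dataset (scikit-learn's Olivetti faces), subspace dimension, and RBF kernel width below are illustrative assumptions, not values taken from the paper.

```python
# Sketch: linear vs. nonlinear subspace for face recognition, with a
# paired significance test. All hyperparameters here are assumptions.
import numpy as np
from scipy.stats import binom
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA, KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

faces = fetch_olivetti_faces()
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.3,
    stratify=faces.target, random_state=0)

def correct_mask(embedder):
    """Fit the subspace on training faces, classify test faces with
    1-NN in the reduced space, and return a boolean mask marking the
    correctly recognized test samples."""
    Z_train = embedder.fit_transform(X_train)
    Z_test = embedder.transform(X_test)
    knn = KNeighborsClassifier(n_neighbors=1).fit(Z_train, y_train)
    return knn.predict(Z_test) == y_test

ok_lin = correct_mask(PCA(n_components=50))
ok_non = correct_mask(KernelPCA(n_components=50, kernel="rbf",
                                gamma=1e-3))  # illustrative kernel width

# Exact McNemar test on the discordant pairs: b = linear right, kernel
# wrong; c = linear wrong, kernel right. Under H0 the two methods err
# on the same samples equally often, so b ~ Binomial(b + c, 0.5).
b = int(np.sum(ok_lin & ~ok_non))
c = int(np.sum(~ok_lin & ok_non))
p_value = (min(1.0, 2.0 * binom.cdf(min(b, c), b + c, 0.5))
           if b + c else 1.0)

print(f"PCA accuracy:        {ok_lin.mean():.3f}")
print(f"Kernel PCA accuracy: {ok_non.mean():.3f}")
print(f"McNemar p-value:     {p_value:.3f}")
```

A raw accuracy gap alone can be an artifact of a single train/test split; the McNemar test, by conditioning on the samples where the two methods disagree, is one standard way to decide whether such a gap is significant, which is the point the abstract makes about much of the prior nonlinear-methods literature.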

Keywords: Dimensionality reduction, Nonlinear manifold, Subspace learning, Principal component analysis, Feature representation, Face recognition

Article history: Received 21 September 2011, Revised 19 January 2012, Accepted 25 March 2012, Available online 11 April 2012.

DOI: https://doi.org/10.1016/j.imavis.2012.03.004