Learning linear PCA with convex semi-definite programming
Abstract
The aim of this paper is to learn a linear principal component using the machinery of support vector machines (SVMs). To this end, a complete SVM-like framework for linear PCA (SVPCA) that determines the projection direction is constructed, in which new notions of expected risk and margin are introduced. Within this framework, a new semi-definite programming problem for maximizing the margin is formulated and a new definition of support vectors is established. As a weighted case of regular PCA, SVPCA coincides with regular PCA when all samples play the same part in data compression. Theoretical analysis indicates that SVPCA rests on a margin-based generalization bound, which ensures good prediction ability. Furthermore, a robust form of SVPCA with an interpretable parameter is obtained using the soft-margin idea from SVMs. A major advantage is that SVPCA is a learning algorithm without local minima, owing to the convexity of the semi-definite optimization problems. To validate the performance of SVPCA, several experiments are conducted; the numerical results demonstrate that its generalization ability is better than that of regular PCA. Finally, some open problems are also discussed.
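The abstract does not spell out the SVPCA optimization problem itself, but its convexity claim can be illustrated with the standard semi-definite relaxation of plain (unweighted) PCA, which the paper presents SVPCA as generalizing in a weighted form. The sketch below is a minimal illustration under that assumption, using the cvxpy library; the toy data, variable names, and recovery step are illustrative and not taken from the paper.

```python
import numpy as np
import cvxpy as cp

# Toy data (illustrative only): 200 centered samples in 5 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) * np.array([3.0, 2.0, 1.0, 0.5, 0.1])
X -= X.mean(axis=0)
S = (X.T @ X) / len(X)  # sample covariance matrix

# SDP relaxation of  max_w  w^T S w  s.t. ||w|| = 1:
# replace the rank-one matrix w w^T by a PSD variable W with unit trace.
d = S.shape[0]
W = cp.Variable((d, d), PSD=True)
problem = cp.Problem(cp.Maximize(cp.trace(S @ W)), [cp.trace(W) == 1])
problem.solve()

# The optimum of this relaxation is attained at a rank-one W, so the
# projection direction is recovered as the top eigenvector of W
# (possibly up to sign).
w = np.linalg.eigh(W.value)[1][:, -1]
print("SDP direction:       ", np.round(w, 3))
print("Top eigenvector of S:", np.round(np.linalg.eigh(S)[1][:, -1], 3))
```

For this unweighted problem the SDP optimum coincides with the leading eigenvector of the covariance matrix, which matches the abstract's remark that SVPCA reduces to regular PCA when all samples carry equal weight; being convex, the problem has no local minima.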
Keywords: Principal component analysis, Statistical learning theory, Support vector machines, Margin, Maximal margin algorithm, Semi-definite programming, Robustness
Article history: Received 13 September 2005, Revised 11 January 2007, Accepted 18 January 2007, Available online 12 February 2007.
DOI: https://doi.org/10.1016/j.patcog.2007.01.022