Commute time guided transformation for feature extraction
Authors:
Abstract
This paper presents a random-walk-based feature extraction method, the commute time guided transformation (CTG), within the graph embedding framework. The paper contributes to this field in two ways. First, it introduces a robust probabilistic metric, the commute time (CT), to extract visual features for face recognition in a manifold-learning manner. Second, it designs the CTG optimization to find linear orthogonal projections that implicitly preserve the commute time of high-dimensional data in a low-dimensional subspace. Compared with previous CT embedding algorithms, the proposed CTG is graph-independent. Existing CT embedding methods are graph-dependent and can only embed the data on the training graph into the subspace. In contrast, the CTG paradigm can project out-of-sample data into the same embedding space as the training graph. Moreover, the CTG projections are robust to the graph topology, achieving good recognition performance regardless of the initial graph structure. Owing to these properties, when applied to face recognition, the proposed CTG method outperforms other state-of-the-art algorithms on benchmark datasets. In particular, it is efficient and effective at recognizing noisy face images.
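For context, the commute time the abstract refers to can be obtained from the pseudoinverse of the graph Laplacian via the standard identity CT(i, j) = vol(G)(L⁺ᵢᵢ + L⁺ⱼⱼ − 2L⁺ᵢⱼ). The sketch below is a minimal illustration of that metric only, not of the paper's CTG optimization, and it assumes a small dense affinity matrix W for which a dense pseudoinverse is affordable.

```python
import numpy as np

def commute_times(W):
    """Pairwise commute times of a weighted, undirected graph.

    W : (n, n) symmetric non-negative affinity matrix.
    Uses CT(i, j) = vol(G) * (L+_ii + L+_jj - 2 L+_ij), where L+ is the
    Moore-Penrose pseudoinverse of the combinatorial graph Laplacian.
    """
    d = W.sum(axis=1)                  # node degrees
    L = np.diag(d) - W                 # combinatorial graph Laplacian
    L_pinv = np.linalg.pinv(L)         # dense pseudoinverse (small graphs only)
    diag = np.diag(L_pinv)
    vol = d.sum()                      # graph volume = sum of degrees
    # Broadcast diag_i + diag_j - 2 * L+_ij over all pairs (i, j)
    return vol * (diag[:, None] + diag[None, :] - 2.0 * L_pinv)

# Usage example: commute times on a 4-node path graph
if __name__ == "__main__":
    W = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    print(commute_times(W).round(2))
```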
Keywords:
Article history: Received 15 January 2010, Accepted 8 November 2011, Available online 23 November 2011.
DOI: https://doi.org/10.1016/j.cviu.2011.11.002