Feature fusion of side face and gait for video-based human identification

Authors:

Highlights:

Abstract

Fusion of multimodal biometrics for video-based human recognition at a distance remains a challenging problem. In contrast to approaches based on match score level fusion, this paper presents a new approach that integrates information from side face and gait at the feature level. Face and gait features are obtained separately using principal component analysis (PCA) applied to the enhanced side face image (ESFI) and the gait energy image (GEI), respectively. Multiple discriminant analysis (MDA) is then applied to the concatenated face and gait features to obtain discriminating synthetic features. This process produces better features and reduces the curse of dimensionality. The proposed scheme is tested on two comparative data sets chosen to show the effects of changes in clothing and of facial appearance over time. The proposed feature level fusion is also compared with match score level fusion and with another feature level fusion scheme. The experimental results demonstrate that the synthetic features, which encode both side face and gait information, carry more discriminating power than the individual biometric features, and that the proposed feature level fusion scheme outperforms both match score level fusion and the other feature level fusion scheme. The performance of the different fusion schemes is also presented as cumulative match characteristic (CMC) curves, which further demonstrate the strength of the proposed fusion scheme.
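The abstract describes a three-step pipeline: per-modality PCA feature extraction from ESFI and GEI, feature concatenation, and MDA projection to obtain synthetic features. Below is a minimal sketch of that pipeline, assuming pre-extracted and flattened ESFI and GEI vectors; the array names, dimensionalities, and the use of scikit-learn's PCA and LinearDiscriminantAnalysis (standing in for the paper's PCA and MDA steps) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of feature-level fusion of side face (ESFI) and gait (GEI).
# Dimensions, component counts, and the random data below are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_classes, per_class = 10, 10
labels = np.repeat(np.arange(n_classes), per_class)        # subject identities
esfi_vectors = rng.normal(size=(labels.size, 1024))         # flattened enhanced side face images
gei_vectors = rng.normal(size=(labels.size, 2048))          # flattened gait energy images

# Step 1: extract face and gait features separately with PCA.
pca_face = PCA(n_components=30).fit(esfi_vectors)
pca_gait = PCA(n_components=30).fit(gei_vectors)
face_feat = pca_face.transform(esfi_vectors)
gait_feat = pca_gait.transform(gei_vectors)

# Step 2: concatenate the two feature vectors (feature-level fusion).
fused = np.hstack([face_feat, gait_feat])

# Step 3: apply discriminant analysis (LDA here, in place of MDA) to the
# concatenated features to obtain low-dimensional, discriminating
# "synthetic" features.
mda = LinearDiscriminantAnalysis(n_components=n_classes - 1).fit(fused, labels)
synthetic_features = mda.transform(fused)
```

In a recognition setting, a probe sequence would be projected through the same PCA and MDA transforms and matched to the gallery by nearest neighbour in the synthetic feature space; ranking the gallery distances for each probe is what yields CMC curves like those reported in the paper.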

Keywords: Face recognition, Gait recognition, Multibiometrics fusion, Video-based recognition at a distance

Article history: Received 5 May 2007, Accepted 26 June 2007, Available online 10 July 2007.

Paper URL: https://doi.org/10.1016/j.patcog.2007.06.019