Sparse Bayesian approach for metric learning in latent space
Authors:
Abstract
This paper presents a new, efficient approach to metric learning in latent space. Our method learns an optimal mapping from the feature space to a latent space that shrinks the distance between similar data items and increases the distance between dissimilar ones. The approach is based on a Bayesian variational framework that iteratively finds the optimal posterior distribution over the model's parameters and hyperparameters. The advantages of the proposed method over similar work are: 1) it learns the noise of the latent variables on the low-dimensional manifold, yielding a more effective transformation; 2) it automatically determines the dimension of the latent space and sparsifies the solution, which prevents overfitting; and 3) unlike Mahalanobis metric learning, the proposed algorithm scales roughly linearly with the dimension of the data. We also extend the method to learning in the feature space induced by an RKHS kernel. The proposed method is evaluated on small and large datasets from real applications such as network intrusion detection, face recognition, handwritten digit recognition, letter recognition, and hyperspectral image classification. The results show that our method outperforms representative related and state-of-the-art methods on many of these datasets.
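To make the objective concrete, the following is a minimal sketch of latent-space metric learning: a linear map `W` is fit by plain gradient descent on a contrastive-style pairwise loss that pulls same-class points together and pushes different-class points apart up to a margin. This is an illustration of the shared goal only, not the paper's sparse Bayesian variational algorithm; the data, the fixed latent dimension `d_latent`, and the `margin` value are all hypothetical (the paper infers the latent dimension automatically).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated clusters in a 5-D feature space (hypothetical).
X = np.vstack([rng.normal(0.0, 0.5, size=(20, 5)),
               rng.normal(3.0, 0.5, size=(20, 5))])
y = np.array([0] * 20 + [1] * 20)

d_latent = 2  # fixed here for illustration; the paper determines this automatically
W = rng.normal(scale=0.1, size=(d_latent, 5))

def pair_loss(W, X, y, margin=2.0):
    """Contrastive pairwise loss and its gradient w.r.t. the linear map W."""
    Z = X @ W.T  # latent embeddings
    loss = 0.0
    grad = np.zeros_like(W)
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            diff = Z[i] - Z[j]          # latent-space difference
            dist2 = diff @ diff         # squared latent distance
            x_diff = X[i] - X[j]        # feature-space difference
            if y[i] == y[j]:
                # Shrink distances between similar items.
                loss += dist2
                grad += 2.0 * np.outer(diff, x_diff)
            elif dist2 < margin ** 2:
                # Push dissimilar items apart, up to the margin.
                loss += margin ** 2 - dist2
                grad -= 2.0 * np.outer(diff, x_diff)
    return loss, grad

# Plain gradient descent on the pairwise loss.
for _ in range(200):
    _, grad = pair_loss(W, X, y)
    W -= 1e-3 * grad / len(X)
```

After training, embedding the data with `Z = X @ W.T` should give smaller average within-class distances than between-class distances, which is the behavior the abstract describes; the Mahalanobis comparison in the abstract refers to the fact that learning a full d-by-d metric is quadratic in the feature dimension, whereas a low-rank map like `W` is not.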
Keywords: Metric learning, Sparse Bayesian learning, Latent space, Variational inference
Article history: Received 24 March 2018, Revised 15 April 2019, Accepted 16 April 2019, Available online 29 April 2019, Version of Record 4 June 2019.
DOI: https://doi.org/10.1016/j.knosys.2019.04.009