Adaptive hash retrieval with kernel based similarity
Authors:
Highlights:
• We explore the idea of normalizing the Gaussian kernel and use it to construct a new similarity function that gives consistent retrieval results. The normalization takes the local distribution of the data into account and is well suited to k-nearest-neighbour search. The new similarity function is proved to be a positive semidefinite (PSD) kernel (a hedged illustrative sketch follows this list).
• We present two unsupervised hashing methods that aim to reconstruct the kernel function with binary codes (see the code-approximation sketch after this list). The first method works with the global similarity measure given by the kernel function; the second is local, focuses on the similarity of pairs of data, and better captures semantically meaningful manifold structure. Both hashing methods have training time linear in the size of the training set and constant indexing time using the proposed hash function. Moreover, we present a supervised hashing scheme based on subspace learning to improve semantic retrieval performance.
• Our third and final contribution is to exploit supervised information (class labels) within the preceding unsupervised hashing framework to achieve semantically better results.
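To make the first highlight concrete, below is a minimal sketch of one kernel normalization that accounts for the local data distribution. The specific choice here, dividing the Gaussian kernel by the square root of per-point kernel-density sums, is an assumption for illustration and not necessarily the paper's exact construction; it does, however, preserve positive semidefiniteness, since D^{-1/2} K D^{-1/2} is PSD whenever K is PSD and D is diagonal with positive entries.

    import numpy as np

    def gaussian_kernel(X, Y, sigma=1.0):
        # Standard Gaussian (RBF) kernel between rows of X and rows of Y.
        sq = (np.sum(X**2, axis=1)[:, None]
              + np.sum(Y**2, axis=1)[None, :]
              - 2.0 * X @ Y.T)
        return np.exp(-sq / (2.0 * sigma**2))

    def normalized_kernel(X, sigma=1.0):
        # Illustrative normalization (assumed, not the paper's exact formula):
        # k_norm(x, y) = k(x, y) / sqrt(d(x) * d(y)), where d(x) = sum_z k(x, z)
        # summarizes the local density around x. Equivalent to D^{-1/2} K D^{-1/2},
        # so the result remains a symmetric PSD kernel matrix.
        K = gaussian_kernel(X, X, sigma)
        d = K.sum(axis=1)                      # positive local-density estimate per point
        return K / np.sqrt(np.outer(d, d))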
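For the second highlight, the idea of approximating a kernel with binary codes can be illustrated by the affinity implied by r-bit codes, which decreases linearly with Hamming distance. The random-hyperplane encoder below is a generic stand-in, not the paper's learned hash functions, and all names are hypothetical.

    import numpy as np

    def hamming_affinity(B):
        # Affinity implied by r-bit codes B in {-1, +1}^(n x r): (1/r) * B B^T
        # lies in [-1, 1] and equals 1 - 2 * (Hamming distance) / r.
        return (B @ B.T) / B.shape[1]

    def random_hyperplane_codes(X, r, seed=0):
        # Baseline binary encoder (sign of random projections), standing in
        # for learned hash functions.
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], r))
        return np.sign(X @ W)

    # Usage: compare the code-based affinity against a target kernel matrix,
    # e.g. the normalized kernel sketched above, as a reconstruction check.
    X = np.random.default_rng(0).standard_normal((100, 16))
    B = random_hyperplane_codes(X, r=32)
    A = hamming_affinity(B)                    # n x n approximate similarity

In this spirit, a hashing objective would choose the codes (or the hash functions producing them) so that A matches the target kernel as closely as possible, which is what makes Hamming-distance retrieval a constant-time proxy for kernel-based search.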
Keywords: Hashing, K-NN, Kernel, Binary indexing, Normalized Euclidean distance
Article history: Received 11 September 2016, Revised 17 March 2017, Accepted 20 March 2017, Available online 30 March 2017, Version of Record 21 November 2017.
DOI: https://doi.org/10.1016/j.patcog.2017.03.020