Learning compressive sampling via multiscale and steerable support value transform

Authors:

Highlights:

Abstract

Considering that salient areas in images play an important role in human visual perception, in this paper we propose a Learning Compressive Sampling (LCS) scheme, together with its physical implementation, for more efficient image acquisition via a new Multiscale and Steerable Support Value Transform (MS2VT). The key idea is to learn geometric saliency maps of images using MS2VT, which is derived from a mapped least squares support vector machine. Because MS2VT produces a multiscale, multidirectional, undecimated, dyadic and aliasing transform with shift-invariance and anisotropy properties, the obtained support values reveal the geometric and saliency information of images. The learned saliency map is then used to formulate a variable density compressive sampling function that realizes simple, fast and efficient sampling, allocating more sensing resources to salient attention areas and fewer to non-salient regions. Several experiments are conducted on natural and remote sensing images to compare the proposed LCS scheme with traditional sampling schemes that do not use saliency information. Moreover, the performance of the MS2VT-based saliency detection scheme is also compared with other related saliency detection approaches. The experimental results indicate that the proposed scheme obtains high-quality images, preserving detailed edges, contours and complex structures especially well, even at low sampling ratios.
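To make the sampling-allocation idea concrete, below is a minimal Python sketch of a saliency-driven variable density sampling mask: locations with larger saliency values are sampled with higher probability, while non-salient regions fall back to a uniform floor. The precomputed saliency input, the linear blend weight alpha, and the Bernoulli mask draw are illustrative assumptions standing in for the paper's MS2VT-based saliency map and its specific variable density compressive function, not the authors' exact formulation.

```python
import numpy as np

def variable_density_mask(saliency, sampling_ratio=0.2, alpha=0.7, rng=None):
    """Build a binary sampling mask whose density follows a saliency map.

    saliency       : 2-D array; larger values mean more visually salient.
    sampling_ratio : overall fraction of locations to sample.
    alpha          : weight of the saliency term vs. a uniform floor
                     (hypothetical blend, not the paper's exact function).
    """
    rng = np.random.default_rng() if rng is None else rng

    # Normalize saliency to [0, 1].
    s = saliency - saliency.min()
    s = s / (s.max() + 1e-12)

    # Variable density: uniform floor plus saliency-weighted term,
    # rescaled so the mean density matches the target sampling ratio.
    density = (1.0 - alpha) + alpha * s
    density *= sampling_ratio * density.size / density.sum()
    density = np.clip(density, 0.0, 1.0)

    # Draw each location independently with its own probability.
    return rng.random(density.shape) < density

# Usage: allocate roughly 20% of samples, concentrated on salient regions.
saliency = np.abs(np.random.randn(256, 256))   # stand-in for an MS2VT saliency map
mask = variable_density_mask(saliency, sampling_ratio=0.2)
print(mask.mean())                             # close to 0.2
```

In this sketch the mask could be applied either to pixel-domain or measurement-domain samples; the choice of domain, like the blend function itself, is left open here because the abstract does not fix it.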

Keywords: Learning compressive sampling, Multiscale and steerable support value transform, Biological visual saliency, Variable density function, Compressive imaging

Article history: Received 15 May 2014, Revised 6 February 2015, Accepted 26 February 2015, Available online 6 March 2015.

DOI: https://doi.org/10.1016/j.knosys.2015.02.028