Learning Complementary Saliency Priors for Foreground Object Segmentation in Complex Scenes
Authors: Yonghong Tian, Jia Li, Shui Yu, Tiejun Huang
Abstract
Object segmentation is widely recognized as one of the most challenging problems in computer vision. A major limitation of existing methods is that most are vulnerable to cluttered backgrounds. Moreover, human intervention is often required to specify foreground/background priors, which restricts the use of object segmentation in real-world scenarios. To address these problems, we propose a novel approach that learns complementary saliency priors for foreground object segmentation in complex scenes. Unlike existing saliency-based segmentation approaches, we learn two complementary saliency maps that reveal the most reliable foreground and background regions. Given these priors, foreground object segmentation is formulated as a binary pixel labelling problem that can be solved efficiently using graph cuts. In this way, the confident saliency priors can be utilized to extract the most salient objects and reduce the distraction of the cluttered background. Extensive experiments show that our approach remarkably outperforms 16 state-of-the-art methods on three public image benchmarks.
Keywords: Foreground object segmentation, Visual saliency, Complementary saliency map, Graph cuts
Paper URL: https://doi.org/10.1007/s11263-014-0737-1
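The abstract formulates segmentation as a binary pixel labelling problem: unary costs come from the two complementary saliency priors, a pairwise term enforces smoothness between neighbouring pixels, and the optimum is found as an s-t minimum cut. The following is a minimal pure-Python sketch of that formulation on a toy 2x3 image, not the paper's method: the saliency values are made up for illustration, capacities are scaled to integers for exact arithmetic, and a textbook Edmonds-Karp max-flow stands in for the specialized graph-cut solvers (e.g. Boykov-Kolmogorov) used in practice.

```python
from collections import deque

def min_cut_labels(n_nodes, edges, source, sink):
    """Edmonds-Karp max-flow; returns the set of nodes that end up on the
    source side of the minimum s-t cut (here: the foreground pixels)."""
    cap = [dict() for _ in range(n_nodes)]          # residual capacities
    for u, v, c in edges:
        cap[u][v] = cap[u].get(v, 0) + c
        cap[v].setdefault(u, 0)                     # residual back-edge
    while True:
        parent = {source: None}                     # BFS for augmenting path
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:                      # no augmenting path left
            break
        path, v = [], sink                          # recover path, augment
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)         # bottleneck capacity
        for u, v in path:
            cap[u][v] -= b
            cap[v][u] += b
    seen, q = {source}, deque([source])             # residual reachability
    while q:                                        # = source side of min cut
        u = q.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in seen:
                seen.add(v)
                q.append(v)
    return seen

# Toy 2x3 image; these saliency values are hypothetical, scaled x10 to ints.
H, W, lam = 2, 3, 3
F = [9, 8, 2, 9, 7, 1]               # foreground saliency prior per pixel
B = [1, 2, 8, 1, 3, 9]               # background saliency prior per pixel
n = H * W
source, sink = n, n + 1
edges = []
for i in range(n):
    edges.append((source, i, F[i]))  # paid if pixel i is labelled background
    edges.append((i, sink, B[i]))    # paid if pixel i is labelled foreground
for r in range(H):
    for c in range(W):
        i = r * W + c
        if c + 1 < W:                # 4-neighbour smoothness, both directions
            edges += [(i, i + 1, lam), (i + 1, i, lam)]
        if r + 1 < H:
            edges += [(i, i + W, lam), (i + W, i, lam)]

foreground = sorted(min_cut_labels(n + 2, edges, source, sink) - {source})
print(foreground)
```

Here the two low-foreground-saliency pixels (indices 2 and 5) fall on the sink side of the cut and are labelled background; the smoothness weight `lam` controls how strongly neighbouring pixels are encouraged to share a label.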