3D wide baseline correspondences using depth-maps

Authors:


Abstract

Point matching between two or more images of a scene shot from different viewpoints is the crucial step in defining the epipolar geometry between views, recovering the camera's egomotion, or building a 3D model of the framed scene. Unfortunately, in most common cases, robust correspondences between points in different images can be established only when small variations in viewpoint, focal length, or lighting are present between the images. Under all other conditions, only ad hoc assumptions about the 3D scene, or weak correspondences obtained through statistical approaches, can be used. In this paper, we present a novel matching method in which depth-maps, nowadays available from cheap, off-the-shelf devices, are integrated with 2D images to provide descriptors that remain robust even under wide baselines or strong lighting variations. We show how depth information can substantially improve matching in wide-baseline contexts compared with state-of-the-art descriptors for plain images.
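The paper's own descriptor is not reproduced here, but the general idea of augmenting a lighting-normalized 2D intensity descriptor with viewpoint-normalized depth can be sketched as below. All function names, the relative-depth normalization, and the weighting scheme are hypothetical illustrations under stated assumptions, not the authors' method:

```python
import numpy as np

def depth_augmented_descriptor(gray_patch, depth_patch, w_depth=0.5):
    """Hypothetical sketch: concatenate a zero-mean, unit-variance intensity
    patch (robust to affine lighting changes) with a depth patch expressed
    relative to its centre pixel (encodes local 3D shape rather than
    absolute camera distance). w_depth balances the two parts."""
    g = gray_patch.astype(np.float64).ravel()
    g = (g - g.mean()) / (g.std() + 1e-8)            # lighting-invariant intensity part
    d = depth_patch.astype(np.float64)
    centre = d[d.shape[0] // 2, d.shape[1] // 2]
    d = (d - centre).ravel()                         # relative depth: shape, not distance
    d = d / (np.abs(d).max() + 1e-8)                 # scale-normalized depth part
    return np.concatenate([g, w_depth * d])

def match_ratio_test(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching of descriptor arrays with Lowe's ratio
    test: accept a match only if the best distance is clearly smaller
    than the second-best."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - da, axis=1)
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

A joint descriptor of this kind degrades gracefully: where the depth patch is flat or missing, matching falls back on the intensity part alone.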

Keywords: Machine vision, Feature extraction, 3D descriptors

Article history: Available online 9 February 2012.

DOI: https://doi.org/10.1016/j.image.2012.01.009