3-D Reconstruction of Urban Scenes from Image Sequences
Abstract
In this paper, we address the problem of recovering a realistic textured model of a scene from a sequence of images, without any prior knowledge about either the parameters of the cameras or their motion. We do not require knowledge of the absolute coordinates of any control points in the scene to achieve this goal. First, using various computer vision tools, we establish correspondences between the images and recover the epipolar geometry, from which we show how to compute the complete set of perspective projection matrices for all camera positions. Then, we proceed to reconstruct the geometry of the scene. We show how to rely on scene information such as parallel lines or known angles to reconstruct the geometry of the scene up to, respectively, an unknown affine transformation or an unknown similitude. Alternatively, if this information is not available, we can still recover the Euclidean structure of the scene through self-calibration techniques. The scene geometry is modeled as a set of polyhedra. Textures to be mapped onto the scene polygons are extracted automatically from the images. We show how several images can be combined through mosaicing to automatically remove visual artifacts such as pedestrians or trees from the textures. This vision system has been implemented as a vision server, which provides geometry or texture information extracted from the set of images to a CAD-CAM modeler. The whole system allows efficient and fast production of high-quality scene models for applications such as simulation, virtual reality, or augmented reality.
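The first stage described in the abstract (correspondences → epipolar geometry → projective projection matrices) can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes OpenCV and NumPy, hypothetical image file names, and uses the standard canonical camera pair P1 = [I | 0], P2 = [[e2]_x F | e2] derived from the fundamental matrix F.

```python
# Minimal sketch (not the authors' system): estimate the epipolar geometry
# between two views from point correspondences, then derive a projective
# pair of camera matrices from the fundamental matrix F.
import cv2
import numpy as np

# Hypothetical input images of the same scene.
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

# 1. Establish correspondences between the images.
sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(img1, None)
k2, d2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

# 2. Recover the epipolar geometry (fundamental matrix F) robustly.
F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)

# 3. Canonical projective camera pair compatible with F:
#    P1 = [I | 0],  P2 = [[e2]_x F | e2], where e2 is the epipole in image 2,
#    i.e. the left null vector of F (F^T e2 = 0).
_, _, Vt = np.linalg.svd(F.T)
e2 = Vt[-1]
e2 /= e2[-1]                      # assumes the epipole is not at infinity
e2_cross = np.array([[0, -e2[2], e2[1]],
                     [e2[2], 0, -e2[0]],
                     [-e2[1], e2[0], 0]])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([e2_cross @ F, e2.reshape(3, 1)])
# P1 and P2 are defined only up to a projective transformation; upgrading to
# an affine or Euclidean frame requires scene constraints (parallel lines,
# known angles) or self-calibration, as the abstract discusses.
```

Similarly, one common way to realize the texture-mosaicing step mentioned at the end of the abstract (again a sketch under assumed inputs, not necessarily the authors' exact method) is to register several views of the same facade with planar homographies and take a per-pixel median, so that transient occluders such as pedestrians are voted out:

```python
import cv2
import numpy as np

def median_mosaic(ref, others, homographies):
    """ref: reference image; others: list of images of the same facade;
    homographies: 3x3 matrices mapping each other-image into the ref frame."""
    h, w = ref.shape[:2]
    stack = [ref.astype(np.float32)]
    for img, H in zip(others, homographies):
        stack.append(cv2.warpPerspective(img, H, (w, h)).astype(np.float32))
    # Per-pixel median across the registered views suppresses pixels that are
    # covered by moving objects in only a minority of the views.
    return np.median(np.stack(stack, axis=0), axis=0).astype(np.uint8)
```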
Article history: Received 15 October 1996, Accepted 15 August 1997, Available online 10 April 2002.
Paper URL: https://doi.org/10.1006/cviu.1998.0665