Image-based rendering and 3D modeling: A complete framework
Abstract:
Multi-viewpoint synthesis of video data is a key technology for the integration of video and 3D graphics, as required by telepresence and augmented-reality applications. This paper describes a number of important techniques that can be employed to accomplish that goal. The techniques presented are based on the analysis of 2D images acquired by two or more cameras. To determine depth information for individual objects in the scene, it is necessary to perform segmentation and disparity estimation, and it is shown how these two analysis tools can benefit from each other. For viewpoint synthesis, techniques with different tradeoffs between complexity and degrees of freedom are presented. The first approach is disparity-controlled view interpolation, which can generate intermediate views along the interocular axis between two adjacent cameras. The second is the recently introduced incomplete 3D technique, which first extracts the texture of the visible surface of a video object acquired with multiple cameras, and then performs disparity-compensated projection from that surface onto a view plane. In the third and most complex approach, a 3D model of the object is generated and represented by a 3D wire grid. For synthesis, this model can be rotated to arbitrary orientations, and the original texture is mapped onto its surface to obtain an arbitrary view of the processed object. The result of this rendering procedure is a virtual image with a very natural appearance.
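The first approach, disparity-controlled view interpolation, can be illustrated with a minimal sketch. The paper itself does not give an algorithm at this level of detail, so the following is only one common formulation under simplifying assumptions: parallel cameras, a dense per-pixel horizontal disparity map defined in left-image coordinates, and a blending position `alpha` in [0, 1] along the interocular axis (0 = left camera, 1 = right camera). Pixels from both original views are forward-warped by the appropriate fraction of their disparity and blended.

```python
import numpy as np

def interpolate_view(left, right, disparity, alpha):
    """Sketch of disparity-controlled view interpolation.

    left, right : (H, W) grayscale images from two parallel cameras
    disparity   : (H, W) horizontal disparity, defined in left-image
                  coordinates (x_right = x_left - disparity)
    alpha       : position along the interocular axis, 0 = left, 1 = right
    """
    h, w = left.shape
    acc = np.zeros((h, w))  # accumulated intensity in the intermediate view
    wgt = np.zeros((h, w))  # accumulated blending weights
    cols = np.arange(w)
    for y in range(h):
        d = disparity[y]
        # A left pixel at column x maps to x - alpha*d in the intermediate view.
        xi = np.clip(np.round(cols - alpha * d).astype(int), 0, w - 1)
        np.add.at(acc[y], xi, (1.0 - alpha) * left[y])
        np.add.at(wgt[y], xi, 1.0 - alpha)
        # The corresponding right pixel sits at x - d; warp it to the same
        # intermediate column and blend with weight alpha.
        xr = np.clip(np.round(cols - d).astype(int), 0, w - 1)
        np.add.at(acc[y], xi, alpha * right[y, xr])
        np.add.at(wgt[y], xi, alpha)
    # Normalize; unfilled pixels (occlusions) would need inpainting in practice.
    return acc / np.maximum(wgt, 1e-8)
```

A real implementation would additionally handle occlusions and disocclusions (holes left by the forward warp), which is where the segmentation and disparity-estimation stages described above come into play.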
Keywords: Disparity estimation, Camera calibration, Segmentation, Content-based image synthesis, 3D modeling, Rendering
Article history: Received 25 March 1998, Available online 20 June 2000.
DOI: https://doi.org/10.1016/S0923-5965(99)00014-4