Real-time view synthesis from a sparse set of views
Abstract
It is known that the pure light field approach to view synthesis relies on a large number of image samples to produce anti-aliased renderings. Otherwise, the insufficiency of image sampling must be compensated for by geometry sampling. Currently, geometry estimation is done either offline or with dedicated hardware. Our solution to this dilemma is based on three key ideas: a formal analysis of the equivalence between light field rendering and plane-based warping, multi-focus imaging in a multi-camera system by plane sweeping, and the fusion of the multi-focus images using multi-view stereo. The essence of our method is to perform only as much depth estimation as the minimal joint image-geometry sampling rate requires, using off-the-shelf graphics hardware. As a result, real-time anti-aliased light field rendering is achieved even when the image samples are insufficient.
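The plane-sweep idea the abstract refers to can be sketched as follows. This is not the authors' GPU implementation; it is a minimal CPU illustration in which the scene planes are fronto-parallel, so warping a neighboring view onto a candidate plane reduces to a horizontal shift by the plane's disparity, and the function name and test data are illustrative only:

```python
import numpy as np

def plane_sweep_depth(ref, src, disparities):
    """Sweep a family of fronto-parallel planes (parameterised by
    integer disparity), warp the source view onto each plane, and
    keep the most photo-consistent plane per pixel."""
    costs = []
    for d in disparities:
        warped = np.roll(src, d, axis=1)    # plane warp = horizontal shift here
        costs.append(np.abs(ref - warped))  # per-pixel photo-consistency cost
    costs = np.stack(costs)                 # shape: (num_planes, H, W)
    best = costs.argmin(axis=0)             # winner-take-all plane selection
    return np.asarray(disparities)[best]    # per-pixel disparity (inverse depth)

# Synthetic two-view example: the whole scene lies on the plane at disparity 3.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
src = np.roll(ref, -3, axis=1)
disp_map = plane_sweep_depth(ref, src, range(8))
print(int(disp_map[16, 16]))  # recovers the true plane: 3
```

In the paper's setting this sweep runs per fragment on graphics hardware and the per-plane images are the multi-focus stack that multi-view stereo then fuses; the winner-take-all step above is the simplest stand-in for that fusion.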
Keywords: Image-based rendering, Light field, Lumigraph, Plenoptic sampling, Fragment shading, Multi-view stereo
Article history: Received 5 December 2006, Accepted 7 December 2006, Available online 23 December 2006.
DOI: https://doi.org/10.1016/j.image.2006.12.003