Modeling of 2D+1 texture movies for video coding

Authors:

Highlights:

Abstract

We propose a novel model-based coding system for video. Model-based coding aims to improve compression gain by replacing non-informative image elements with perceptually equivalent models. Images containing large textured regions are ideal candidates. Texture movies are obtained by filming a static texture with a moving camera. Integrating the motion information into the generative texture process makes it possible to replace the ‘real’ texture with a ‘visually equivalent’ synthetic one while preserving the correct perception of motion. Global motion estimation is used to determine the movement of the camera and to identify the overlapping region between two successive frames. This information is then exploited to generate the texture movies. The proposed method for synthesizing 2D+1 texture movies can emulate any piecewise-linear camera trajectory. The compression performance is very encouraging: on this kind of video sequence, the proposed method improves the compression rate of a state-of-the-art MPEG-4 video coder by an order of magnitude while providing noticeably better perceptual quality. Importantly, the current implementation runs in real time on Intel PIII processors.
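The abstract's pipeline (estimate the global camera motion between successive frames, then derive the overlap region so only the newly exposed area needs synthesis) can be sketched as follows. The paper does not specify its estimator, so this sketch assumes a purely translational camera motion and uses phase correlation, a standard global-motion technique; the function names and the rectangular overlap model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_global_shift(prev, curr):
    """Estimate a global translational shift (dy, dx) between two
    frames via phase correlation. Illustrative only: the paper's
    exact global motion estimator is not specified in the abstract."""
    F1 = np.fft.fft2(prev)
    F2 = np.fft.fft2(curr)
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift.
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak coordinates to signed shifts.
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, corr.shape))

def overlap_region(shape, dy, dx):
    """Rectangle of the current frame already covered by the previous
    frame under a (dy, dx) shift; only the remainder would require
    fresh texture synthesis."""
    h, w = shape
    y0, y1 = max(0, dy), min(h, h + dy)
    x0, x1 = max(0, dx), min(w, w + dx)
    return (y0, y1, x0, x1)
```

For example, a frame circularly shifted by (5, −3) pixels is recovered exactly by the estimator, and the overlap rectangle tells the synthesizer that only a 5-pixel band of rows and a 3-pixel band of columns are newly exposed.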

Keywords: Model-based coding, Dynamic textures, Dynamic coding

Review process: Available online 24 December 2002.

DOI: https://doi.org/10.1016/S0262-8856(02)00132-4