Unconstrained motion compensated temporal filtering (UMCTF) for efficient and flexible interframe wavelet video coding

Authors:

Highlights:

Abstract

We introduce an efficient and flexible framework for temporal filtering in wavelet-based scalable video codecs, called unconstrained motion compensated temporal filtering (UMCTF). UMCTF allows the use of different filters and temporal decomposition structures through a set of controlling parameters that can be modified easily during the coding process, at different granularities and levels. The proposed framework enables adaptation of the coding process to the video content and to network and end-device characteristics; it provides enhanced scalability, content adaptivity, and reduced delay, while improving coding efficiency compared with state-of-the-art motion-compensated wavelet video coders. Additionally, a mechanism is proposed for controlling the distortion variation in UMCTF-based video coding that employs only the predict step. The control mechanism is formulated by expressing the distortion in an arbitrary decoded frame, at any temporal level in the pyramid, as a function of the distortions in the reference frames at the same temporal level. All the scenarios proposed in the paper are experimentally validated through a coding scheme that incorporates advanced features such as rate-distortion optimized variable block-size multihypothesis prediction and overlapped block motion compensation. Experiments are carried out to determine the relative efficiency of different UMCTF instantiations, as well as to compare against the current state of the art in video coding.
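The predict-only temporal decomposition described in the abstract replaces each frame by a high-pass residual formed from motion-compensated reference frames, with multihypothesis prediction corresponding to a weighted combination of several references. A minimal sketch of such a predict step is shown below; the function name, the weights, and the assumption that references are already motion-aligned to the current frame are illustrative simplifications (motion estimation and compensation are omitted), not the paper's actual implementation.

```python
import numpy as np

def umctf_predict_step(frame, references, weights=None):
    """Hypothetical sketch of a predict-only UMCTF lifting step.

    Forms the high-pass residual H = A - sum_i w_i * R_i, where each
    reference R_i is assumed to be already motion-compensated toward
    the current frame A. Multihypothesis prediction corresponds to
    using several references with fractional weights.
    """
    if weights is None:
        # Default: average the hypotheses equally.
        weights = [1.0 / len(references)] * len(references)
    prediction = np.zeros_like(frame, dtype=np.float64)
    for w, ref in zip(weights, references):
        prediction += w * ref.astype(np.float64)
    return frame.astype(np.float64) - prediction

# Usage: a single-reference predict step on constant test frames;
# the residual equals the frame difference since motion is identity here.
a = np.full((4, 4), 100.0)
r = np.full((4, 4), 96.0)
h = umctf_predict_step(a, [r])
```

Because only the predict step is used (no update step), the low-pass frames remain unfiltered originals, which is what makes the distortion in a decoded frame expressible in terms of the distortions of its reference frames at the same temporal level.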

Keywords: Motion compensated temporal filtering, Wavelet video coding

Article history: Received 1 June 2003, Revised 15 June 2004, Available online 22 September 2004.

DOI: https://doi.org/10.1016/j.image.2004.08.006