On the transformed entropy-constrained vector quantizers employing Mandala block for image coding
Authors:
Abstract:
This paper presents two image coding techniques employing an entropy-constrained vector quantizer (ECVQ) in the DCT domain. In our approach, the transformed image is rearranged into Mandala blocks for vector quantization. Each Mandala block is then divided into several smaller vectors of variable dimension, according to its statistical properties, and is encoded by ECVQs designed separately for each Mandala block. While each Mandala block undergoes an unstructured ECVQ in the first technique, the second technique employs a structured ECVQ, i.e., an entropy-constrained lattice vector quantizer (ECLVQ). In the ECLVQ, unlike the conventional lattice VQ combined with entropy coding, both the distortion and the entropy are taken into account during encoding. Moreover, to improve the performance further, the ECLVQ parameters, including the truncation of the lattice and the scale factor, are optimized according to the input image statistics. We also reduce the size of the variable word-length code table, which grows exponentially with the vector dimension and bit rate, by grouping similar codewords. The performance of both techniques is evaluated on real images, and it is found that the proposed techniques provide a 1–2 dB gain over the DCT-classified VQ (Kim and Lee, 1992) in the range of 0.3–0.5 bits per pixel (bpp) and a 0.5–1.3 dB gain over JPEG in the range of 0.1–0.5 bpp.
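As a rough illustration of the two ideas summarized in the abstract (not the authors' implementation), the Python sketch below shows (a) a simple Mandala-style regrouping, where same-frequency DCT coefficients are collected across all 8×8 blocks, and (b) entropy-constrained codeword selection, which minimizes a Lagrangian cost of distortion plus a rate term. The names `mandala_rearrange`, `ecvq_encode`, `lam`, `codebook`, and `code_lengths` are hypothetical choices for this sketch, and the code lengths are stand-ins for a real variable word-length code table.

```python
import numpy as np


def mandala_rearrange(dct_blocks):
    """Group same-frequency DCT coefficients across all 8x8 blocks.

    dct_blocks: array of shape (num_blocks, 8, 8) holding the DCT of each
    image block.  Returns a dict mapping a coefficient position (u, v) to
    the 1-D array of that coefficient taken from every block -- one
    "Mandala block" per frequency position (illustrative layout only).
    """
    return {(u, v): dct_blocks[:, u, v].copy()
            for u in range(8) for v in range(8)}


def ecvq_encode(x, codebook, code_lengths, lam):
    """Entropy-constrained codeword selection for one input vector.

    Picks the index i minimizing  ||x - c_i||^2 + lam * L_i,
    where L_i is the length (in bits) of the variable-length code assigned
    to codeword i and lam trades distortion against rate.
    """
    dist = np.sum((codebook - x) ** 2, axis=1)   # squared error to each codeword
    cost = dist + lam * code_lengths             # Lagrangian rate-distortion cost
    return int(np.argmin(cost))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    codebook = rng.normal(size=(16, 4))          # 16 codewords of dimension 4
    code_lengths = rng.integers(2, 9, size=16)   # hypothetical codeword bit lengths
    x = rng.normal(size=4)
    print("selected codeword:", ecvq_encode(x, codebook, code_lengths, lam=0.1))
```

In this sketch, larger values of `lam` bias the encoder toward codewords with shorter codes (lower rate), while `lam = 0` reduces to ordinary nearest-neighbor VQ.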
Keywords: Discrete cosine transform, Vector quantization, Mandala block, Lattice vector quantization, Entropy constrained vector quantization, Entropy constrained lattice vector quantization
Article history: Received 13 January 1994; Available online 7 April 2000.
DOI: https://doi.org/10.1016/0923-5965(94)00043-I