The LBG-U Method for Vector Quantization – an Improvement over LBG Inspired from Neural Networks
Author: Bernd Fritzke
Abstract
A new vector quantization method (LBG-U), closely related to a particular class of neural network models (growing self-organizing networks), is presented. LBG-U consists mainly of repeated runs of the well-known LBG algorithm. Each time LBG converges, however, a novel measure of utility is assigned to each codebook vector. Thereafter, the vector with minimum utility is moved to a new location, LBG is run on the resulting modified codebook until convergence, another vector is moved, and so on. Since a strictly monotonic improvement of the LBG-generated codebooks is enforced, it can be proved that LBG-U terminates after a finite number of steps. Experiments with artificial data demonstrate significant improvements in terms of RMSE over LBG, at only modestly higher computational cost.
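The abstract describes the outer loop concretely enough to sketch it. Below is a minimal NumPy sketch, not the authors' code: `lbg` is a plain LBG (k-means-style) inner loop, and `lbg_u` implements the repeat/move/rerun scheme. The specific utility definition (extra distortion incurred if a vector were removed) and the relocation rule (place the least useful vector near the highest-error vector, with a small jitter) are assumptions consistent with the abstract; all function names and constants are illustrative.

```python
import numpy as np

def lbg(data, codebook, n_iter=100, tol=1e-10):
    """Standard LBG: alternate nearest-neighbor assignment and centroid update."""
    prev_err = np.inf
    for _ in range(n_iter):
        # Squared distances from every data point to every codebook vector.
        d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        err = d2[np.arange(len(data)), labels].mean()
        if prev_err - err < tol:   # converged
            break
        prev_err = err
        # Move each codebook vector to the centroid of its Voronoi cell.
        for k in range(len(codebook)):
            pts = data[labels == k]
            if len(pts):
                codebook[k] = pts.mean(axis=0)
    return codebook, err

def lbg_u(data, n_codes, seed=0):
    """LBG-U outer loop (sketch): rerun LBG, relocating the minimum-utility vector."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), n_codes, replace=False)].copy()
    codebook, best_err = lbg(data, codebook)
    best = codebook.copy()
    while True:
        d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        nearest = d2[np.arange(len(data)), labels]
        # Local error E(k): squared error accumulated inside each Voronoi cell.
        E = np.bincount(labels, weights=nearest, minlength=n_codes)
        # Utility U(k) (assumed form): extra distortion the cell's points would
        # incur if vector k were removed and they fell to their second-nearest.
        d2_no_self = d2.copy()
        d2_no_self[np.arange(len(data)), labels] = np.inf
        U = np.bincount(labels, weights=d2_no_self.min(axis=1) - nearest,
                        minlength=n_codes)
        # Move the least useful vector next to the highest-error vector (jittered).
        codebook[U.argmin()] = (codebook[E.argmax()]
                                + 1e-3 * rng.standard_normal(data.shape[1]))
        codebook, err = lbg(data, codebook)
        if err >= best_err:        # no strict improvement -> terminate
            return best, best_err
        best, best_err = codebook.copy(), err
```

Because the loop only continues while each relocation strictly lowers the distortion of the best codebook found so far, and distortion is bounded below, this sketch terminates, which mirrors the finite-termination argument the abstract mentions.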
Keywords: codebook construction, data compression, growing neural networks, LBG, vector quantization
Paper link: https://doi.org/10.1023/A:1009653226428