Discriminative training via minimization of risk estimates based on Parzen smoothing
Authors: Erik McDermott, Shigeru Katagiri

Abstract
We describe a new approach to estimating classification risk in the domain of a suitably defined transformation that can be used as the basis for optimization of generic pattern recognition systems, including hidden Markov models and Multi-Layer Perceptrons. The two formulations of risk estimate described here are closely tied to the Minimum Classification Error/Generalized Probabilistic Descent (MCE/GPD) framework for discriminative training that is well-known to the speech recognition community. In the new approach, high-dimensional and possibly variable-length training tokens are mapped to the centers of Parzen kernels which are then easily integrated to find the risk estimate. The utility of such risk estimates lies in the fact that they are explicit functions of the system parameters and hence suitable for use in practical optimization methods. The use of Parzen estimation makes it possible to establish convergence of the risk estimate to the true theoretical classification risk, a result that formally expresses the benefit of linking the degree of smoothing to the training set size. Convergence of the minimized risk estimate is also analyzed. The new approach establishes a more general theoretical foundation for discriminative training than existed before, supporting previous work and suggesting new variations for future work.
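To illustrate the core idea of the abstract, here is a minimal sketch, not the paper's actual formulation: training tokens are reduced to scalar misclassification scores (as in MCE/GPD), a Gaussian Parzen kernel of bandwidth `h` is centered on each score, and integrating the 0-1 step loss against each kernel yields a smooth, differentiable risk estimate (the Gaussian CDF of each score). The function names and the specific misclassification measure below are illustrative assumptions.

```python
import math


def misclassification_measure(scores, label):
    # MCE-style misclassification score d(x): the best competing
    # class score minus the correct class score. d > 0 means the
    # token is misclassified under the 0-1 decision rule.
    # (Illustrative choice; MCE/GPD admits other measures.)
    correct = scores[label]
    best_other = max(s for i, s in enumerate(scores) if i != label)
    return best_other - correct


def parzen_risk(d_values, h):
    # Parzen-smoothed estimate of classification risk: center a
    # Gaussian kernel of bandwidth h at each token's score d_i and
    # integrate the 0-1 step loss against it, which evaluates to the
    # Gaussian CDF Phi(d_i / h). As h -> 0 this approaches the raw
    # empirical error rate; larger h gives a smoother objective.
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sum(phi(d / h) for d in d_values) / len(d_values)


# Toy example: three tokens, per-class discriminant scores.
tokens = [([2.0, 0.0], 0),   # correctly classified, d = -2.0
          ([1.0, 0.0], 0),   # correctly classified, d = -1.0
          ([0.0, 1.5], 0)]   # misclassified,        d =  1.5
d_vals = [misclassification_measure(s, y) for s, y in tokens]
```

With a very small bandwidth the smoothed risk is close to the empirical 0-1 error (1/3 here), while a larger bandwidth trades fidelity for a smoother surface suitable for gradient-based optimization; the paper's convergence result corresponds to shrinking `h` as the training set grows.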
Keywords: Risk estimation, Discriminative training, Speech recognition
Paper URL: https://doi.org/10.1007/s10489-006-8865-0