Testing the performance of spoken dialogue systems by means of an artificially simulated user

Authors: Ramón López-Cózar, Zoraida Callejas, Michael McTear

Abstract

This paper proposes a new technique to test the performance of spoken dialogue systems by artificially simulating the behaviour of three types of user (very cooperative, cooperative and not very cooperative) interacting with a system by means of spoken dialogues. Experiments using the technique were carried out to test the performance of a previously developed dialogue system designed for the fast-food domain and working with two kinds of language model for automatic speech recognition: one based on 17 prompt-dependent language models, and the other based on one prompt-independent language model. The use of the simulated user enables the identification of problems relating to the speech recognition, spoken language understanding, and dialogue management components of the system. In particular, in these experiments problems were encountered with the recognition and understanding of postal codes and addresses and with the lengthy sequences of repetitive confirmation turns required to correct these errors. By employing a simulated user in a range of different experimental conditions sufficient data can be generated to support a systematic analysis of potential problems and to enable fine-grained tuning of the system.
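The paper's simulator is not specified in this abstract, but the core idea of user types with different cooperativeness levels can be illustrated with a minimal sketch. Everything below is hypothetical: the slot names, the numeric cooperativeness values, and the `simulated_user_response` function are illustrative assumptions, not the authors' actual implementation.

```python
import random

# Hypothetical cooperativeness levels: the probability that the simulated
# user answers exactly (and only) what the system prompt requested.
COOPERATIVENESS = {
    "very_cooperative": 1.0,      # always answers precisely what is asked
    "cooperative": 0.7,           # mostly on-topic
    "not_very_cooperative": 0.4,  # often volunteers unrequested information
}

def simulated_user_response(prompt_slot, scenario, level, rng=random.random):
    """Return the slot values the simulated user utters for one system prompt.

    scenario maps slot names (e.g. "food", "postcode") to the values the
    simulated user wants to convey during the dialogue.
    """
    answer = [scenario[prompt_slot]]       # the requested item
    if rng() > COOPERATIVENESS[level]:     # a less cooperative user also
        extras = [s for s in scenario if s != prompt_slot]
        answer.append(scenario[extras[0]])  # supplies an unrequested item
    return answer
```

Feeding such responses (rendered as speech or as recognition-result strings) to the dialogue system under different experimental conditions is what lets the technique expose recognition and understanding problems, such as the postal-code and address errors reported in the abstract.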

Keywords: Spoken dialogue systems, Speech recognition, Speech understanding, User simulation, Artificial intelligence, Natural language processing, Robust human–computer interaction

DOI: https://doi.org/10.1007/s10462-007-9059-9