Pushing the boundaries of audiovisual word recognition using Residual Networks and LSTMs

Authors:

Highlights:

Abstract

Visual and audiovisual speech recognition are experiencing a renaissance, largely driven by the advent of deep learning methods. In this paper, we present a deep learning architecture for lipreading and audiovisual word recognition that combines Residual Networks equipped with spatiotemporal input layers and Bidirectional LSTMs. The lipreading architecture attains an 11.92% misclassification rate on the challenging Lipreading-In-The-Wild database, which is composed of excerpts from BBC TV, each containing one of 500 target words. Audiovisual experiments are performed using both intermediate and late integration, under several types and levels of environmental noise, and notable improvements over the audio-only network are reported, even in the case of clean speech. We further analyze the utility of target-word boundaries and the capacity of the network to model the linguistic context of the target word. Finally, we examine difficult word pairs and discuss how visual information helps attain higher recognition accuracy.
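The lipreading branch described in the abstract can be made concrete with a short sketch. Below is a minimal PyTorch illustration (the framework, the resnet18 trunk, the layer sizes, and the temporal averaging are all illustrative assumptions, not the paper's exact configuration): a 3D spatiotemporal convolutional front-end that preserves the time axis, a 2D ResNet trunk applied to each frame's feature map, and a Bidirectional LSTM back-end that classifies over the 500 target words.

```python
# Minimal sketch (assumptions flagged inline): 3D spatiotemporal front-end
# -> per-frame 2D ResNet trunk -> BiLSTM -> logits over 500 words.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class LipreadingNet(nn.Module):
    def __init__(self, num_words=500, hidden=256):
        super().__init__()
        # Spatiotemporal front-end: stride 1 in time keeps one feature map per frame.
        self.front3d = nn.Sequential(
            nn.Conv3d(1, 64, kernel_size=(5, 7, 7), stride=(1, 2, 2),
                      padding=(2, 3, 3), bias=False),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1)),
        )
        # Illustrative 2D trunk: resnet18 with conv1 and the final fc dropped
        # (use pretrained=False instead of weights=None on older torchvision).
        trunk = resnet18(weights=None)
        self.trunk = nn.Sequential(*list(trunk.children())[1:-1])
        # Bidirectional LSTM over the sequence of per-frame embeddings.
        self.blstm = nn.LSTM(512, hidden, num_layers=2,
                             bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_words)

    def forward(self, x):            # x: (batch, 1, time, H, W) grayscale mouth ROIs
        feats = self.front3d(x)      # (batch, 64, time, H', W')
        b, c, t, h, w = feats.shape
        feats = feats.transpose(1, 2).reshape(b * t, c, h, w)
        feats = self.trunk(feats).reshape(b, t, -1)   # (batch, time, 512)
        out, _ = self.blstm(feats)                    # (batch, time, 2*hidden)
        return self.fc(out.mean(dim=1))               # average over time -> word logits

logits = LipreadingNet()(torch.randn(2, 1, 29, 112, 112))  # 29 frames of 112x112 crops
print(logits.shape)  # torch.Size([2, 500])
```

For the audiovisual experiments, an analogous audio branch can be fused either by concatenating the two modalities' sequence features before a shared BLSTM (intermediate integration) or by combining the word posteriors of the two single-modality networks (late integration); the sketch above covers only the visual stream.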

Keywords:

Article history: Received 11 May 2018, Revised 7 August 2018, Accepted 14 October 2018, Available online 1 November 2018, Version of Record 6 December 2018.

DOI: https://doi.org/10.1016/j.cviu.2018.10.003