Learning to lip read words by watching videos

Authors: Joon Son Chung, Andrew Zisserman

Abstract

Our aim is to recognise the words being spoken by a talking face, given only the video but not the audio. Existing work in this area has focussed on trying to recognise a small number of utterances in controlled environments (e.g. digits and the alphabet), partly due to the shortage of suitable datasets. We make three novel contributions: first, we develop a pipeline for fully automated data collection from TV broadcasts; with this we have generated a dataset with over a million word instances, spoken by over a thousand different people. Second, we develop a two-stream convolutional neural network that learns a joint embedding between the sound and the mouth motions from unlabelled data; we apply this network to the tasks of audio-to-video synchronisation and active speaker detection. Third, we train convolutional and recurrent networks that are able to effectively learn and recognise hundreds of words from this large-scale dataset. In both lip reading and speaker detection, we demonstrate results that exceed the current state of the art on public benchmark datasets.
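To make the second contribution concrete, the sketch below shows one way such a two-stream embedding could be set up in PyTorch. It is an illustration, not the authors' implementation: the names `MouthEncoder`, `AudioEncoder`, and `sync_contrastive_loss`, the layer sizes, and the contrastive objective on synchronised versus time-shifted audio-video pairs (which supplies training labels without manual annotation) are all assumptions consistent with the abstract's description of learning a joint embedding from unlabelled data.

```python
# Illustrative sketch (not the authors' code) of a two-stream network that
# maps mouth-region video and the matching audio into a joint embedding space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MouthEncoder(nn.Module):
    """Video stream: encodes a short stack of mouth crops (hypothetical sizes)."""
    def __init__(self, frames=5, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(frames, 64, 5, stride=2), nn.ReLU(),   # frames stacked as channels
            nn.Conv2d(64, 128, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, dim)

    def forward(self, x):                 # x: (B, frames, H, W) grayscale mouth crops
        return self.fc(self.net(x).flatten(1))

class AudioEncoder(nn.Module):
    """Audio stream: encodes the spectrogram window covering the same time span."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 5, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, dim)

    def forward(self, a):                 # a: (B, 1, mel_bins, time)
        return self.fc(self.net(a).flatten(1))

def sync_contrastive_loss(v, a, y, margin=1.0):
    """Pull embeddings of in-sync pairs together, push shifted pairs apart.
    y = 1 for a genuinely synchronised audio-video pair, y = 0 for a pair
    whose audio was shifted in time; such labels come free from raw video."""
    d = F.pairwise_distance(v, a)
    return (y * d.pow(2) + (1 - y) * F.relu(margin - d).pow(2)).mean()
```

Under a setup like this, comparing embedding distances across candidate temporal offsets yields an audio-to-video synchronisation score, and a face whose mouth embedding never matches the soundtrack can be flagged as a non-speaker, which is how the abstract connects the joint embedding to active speaker detection.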

Article history: Received 29 April 2017, Revised 24 December 2017, Accepted 1 February 2018, Available online 6 February 2018, Version of Record 12 December 2018.

DOI: https://doi.org/10.1016/j.cviu.2018.02.001