You Said That?: Synthesising Talking Faces from Audio
Authors: Amir Jamaludin, Joon Son Chung, Andrew Zisserman
Abstract
We describe a method for generating a video of a talking face. The method takes still images of the target face and an audio speech segment as inputs, and generates a video of the target face lip synched with the audio. The method runs in real time and is applicable to faces and audio not seen at training time. To achieve this we develop an encoder–decoder convolutional neural network (CNN) model that uses a joint embedding of the face and audio to generate synthesised talking face video frames. The model is trained on unlabelled videos using cross-modal self-supervision. We also propose methods to re-dub videos by visually blending the generated face into the source video frame using a multi-stream CNN model.
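The abstract's central idea, encoding a still face image (identity) and an audio segment separately, fusing them into a joint embedding, and decoding a synthesised frame, can be sketched as follows. This is a minimal NumPy illustration, not the paper's architecture: the layer sizes, random weights, and function names (`synthesise_frame`, etc.) are assumptions, and dense random matrices stand in for the trained convolutional encoders and decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Illustrative dimensions (assumptions, not taken from the paper).
FACE_DIM, AUDIO_DIM, EMB_DIM, FRAME_DIM = 112 * 112, 13 * 35, 256, 112 * 112

# Random matrices stand in for trained CNN encoders/decoder weights.
W_face = rng.standard_normal((EMB_DIM, FACE_DIM)) * 0.01
W_audio = rng.standard_normal((EMB_DIM, AUDIO_DIM)) * 0.01
W_dec = rng.standard_normal((FRAME_DIM, 2 * EMB_DIM)) * 0.01

def synthesise_frame(face_img, audio_feat):
    """Encode identity and audio, fuse into a joint embedding, decode one frame."""
    z_face = relu(W_face @ face_img.ravel())      # identity embedding
    z_audio = relu(W_audio @ audio_feat.ravel())  # audio embedding
    z_joint = np.concatenate([z_face, z_audio])   # joint embedding
    return (W_dec @ z_joint).reshape(112, 112)    # synthesised face frame

frame = synthesise_frame(rng.random((112, 112)), rng.random((13, 35)))
print(frame.shape)
```

In the actual system this decoding is repeated per audio window to produce a video, and the generated face is blended back into the source frame for re-dubbing; here only the single-frame fusion step is shown.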
Keywords: Computer vision, Machine learning, Visual speech synthesis, Video synthesis
Paper URL: https://doi.org/10.1007/s11263-019-01150-y