You said that?
Joon Son Chung, Amir Jamaludin and Andrew Zisserman
Abstract
We present a method for generating a video of a talking face. The method takes as inputs: (i) still images of the target face, and (ii) an audio speech segment; and outputs a video of the target face lip synched with the audio. The method runs in real time and is applicable to faces and audio not seen at training time.

To achieve this we propose an encoder-decoder CNN model that uses a joint embedding of the face and audio to generate synthesised talking face video frames. The model is trained on tens of hours of unlabelled videos.
We also show results of re-dubbing videos using speech from a different person.
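For readers who want a concrete picture of the joint-embedding design described in the abstract, the sketch below shows one plausible way to wire it up in PyTorch: an audio encoder over an MFCC window and an identity encoder over a still face image, whose embeddings are concatenated and decoded into a single video frame. The input sizes (a 3x112x112 face crop, a 1x12x35 MFCC window), the 256-d embeddings and all layer choices are illustrative assumptions, not the exact architecture from the paper.

# Minimal sketch of a joint-embedding encoder-decoder for talking-face
# generation. All shapes and layer sizes are assumptions for illustration.
import torch
import torch.nn as nn

class TalkingFaceSketch(nn.Module):
    def __init__(self, embed_dim=256):
        super().__init__()
        # Audio encoder: CNN over an MFCC window (assumed 1 x 12 x 35).
        self.audio_enc = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                         # -> 64 x 6 x 17
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                         # -> 128 x 3 x 8
            nn.Flatten(),
            nn.Linear(128 * 3 * 8, embed_dim), nn.ReLU(),
        )
        # Identity encoder: CNN over a still image of the target face
        # (assumed 3 x 112 x 112).
        self.face_enc = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),    # -> 64 x 56 x 56
            nn.MaxPool2d(2),                                         # -> 64 x 28 x 28
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),  # -> 128 x 14 x 14
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(), # -> 256 x 7 x 7
            nn.Flatten(),
            nn.Linear(256 * 7 * 7, embed_dim), nn.ReLU(),
        )
        # Decoder: upsample the joint embedding back to a face frame.
        self.decoder = nn.Sequential(
            nn.Linear(2 * embed_dim, 256 * 7 * 7), nn.ReLU(),
            nn.Unflatten(1, (256, 7, 7)),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # -> 14 x 14
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # -> 28 x 28
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # -> 56 x 56
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # -> 112 x 112
        )

    def forward(self, mfcc, still):
        # The concatenated vector is the joint embedding: each generated
        # frame is conditioned on both the speech content and the identity.
        z = torch.cat([self.audio_enc(mfcc), self.face_enc(still)], dim=1)
        return self.decoder(z)

model = TalkingFaceSketch()
frame = model(torch.randn(1, 1, 12, 35), torch.rand(1, 3, 112, 112))
print(frame.shape)  # torch.Size([1, 3, 112, 112])

At inference time such a model would be run once per audio window, sliding along the speech segment to produce the output video one frame at a time, which is consistent with the real-time claim in the abstract.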
Session
Orals - Face Analysis
Files
Paper (PDF)
Supplementary (PDF)
DOI
10.5244/C.31.109
https://dx.doi.org/10.5244/C.31.109
Citation
Joon Son Chung, Amir Jamaludin and Andrew Zisserman. You said that?. In T.K. Kim, S. Zafeiriou, G. Brostow and K. Mikolajczyk, editors, Proceedings of the British Machine Vision Conference (BMVC), pages 109.1-109.12. BMVA Press, September 2017.
Bibtex
@inproceedings{BMVC2017_109,
title={You said that?},
author={Chung, Joon Son and Jamaludin, Amir and Zisserman, Andrew},
year={2017},
month={September},
pages={109.1--109.12},
articleno={109},
numpages={12},
booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
publisher={BMVA Press},
editor={Kim, Tae-Kyun and Zafeiriou, Stefanos and Brostow, Gabriel and Mikolajczyk, Krystian},
doi={10.5244/C.31.109},
isbn={1-901725-60-X},
url={https://dx.doi.org/10.5244/C.31.109}
}