Towards a low bandwidth talking face using appearance models

B Theobald, G Cawley, S Kruse and J A Bangham

The paper is motivated by the need to develop low-bandwidth virtual humans capable of delivering audio-visual speech and sign language at a quality comparable to high-bandwidth video. The number of bits required to animate a virtual human is significantly reduced by combining an appearance model with parameter compression. A new perceptual method is introduced and used to evaluate the quality of the synthesised sequences. The results suggest that a rate as low as 3.6 kbit/s still yields acceptable quality.
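
To make the bitrate claim concrete, the following is a minimal illustrative sketch of the general idea of appearance-model coding: represent each face frame by a few model parameters and quantise those instead of transmitting pixels. It assumes a texture-only PCA model with uniform quantisation; the function names, mode count and bit allocation are hypothetical and not taken from the paper, which uses a combined shape-and-texture appearance model.

```python
import numpy as np

def fit_appearance_model(training_frames, n_modes=20):
    """Fit a simple PCA appearance model: a mean face plus principal modes.

    training_frames: (n_frames, n_pixels) array of vectorised face images.
    """
    mean = training_frames.mean(axis=0)
    centred = training_frames - mean
    # Principal modes of variation from an SVD of the centred data.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:n_modes]                 # modes: (n_modes, n_pixels)

def encode_frame(frame, mean, modes, bits_per_param=7):
    """Project a frame onto the model and uniformly quantise the parameters."""
    params = modes @ (frame - mean)           # appearance parameters
    scale = max(np.abs(params).max(), 1e-12)  # avoid division by zero
    levels = 2 ** (bits_per_param - 1) - 1
    return np.round(params / scale * levels).astype(int), scale

def decode_frame(quantised, scale, mean, modes, bits_per_param=7):
    """Reconstruct an approximate frame from the quantised parameters."""
    levels = 2 ** (bits_per_param - 1) - 1
    return mean + modes.T @ (quantised * scale / levels)

# Back-of-envelope bitrate: at 25 frames/s, 20 modes and 7 bits per
# parameter give 25 * 20 * 7 = 3500 bits/s, the same order as the
# 3.6 kbit/s figure quoted above (ignoring the per-frame scale factor).
```

The sketch is only meant to convey why the bitrate scales with the number of modes, the bits per parameter and the frame rate; the paper's own parameter compression may differ.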

This document was produced for BMVC 2001.