Track Facial Points in Unconstrained Videos
Xi Peng, Qiong Hu, Junzhou Huang and Dimitris Metaxas
Abstract
Tracking facial points in unconstrained videos is challenging due to non-rigid deformation that changes over time. In this paper, we propose to exploit incremental learning for person-specific alignment in wild conditions. Our approach takes advantage of part-based representation and cascade regression for robust and efficient alignment on each frame. Unlike existing methods that usually rely on models trained offline, we incrementally update the representation subspace and the cascade of regressors in a unified framework to achieve personalized modeling on the fly. To alleviate the drifting issue, the fitting results are evaluated using a deep neural network, and well-aligned faces are picked out to incrementally update the representation and fitting models. Both image and video datasets are employed to validate the proposed method. The results demonstrate the superior performance of our approach compared with existing approaches in terms of fitting accuracy and efficiency.
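To make the cascade-regression idea concrete, below is a toy sketch of a cascade of regressors that refines a shape estimate stage by stage, plus a naive incremental update that nudges the stages toward a verified ("well-aligned") fitting result. Everything here is an illustrative assumption: the scalar "shape", the feature, and the gradient-style update rule stand in for the paper's actual part-based subspace, shape-indexed features, and unified update framework, none of which are reproduced.

```python
class CascadeRegressor:
    """Toy cascade of linear regressors for iterative shape refinement.

    Each stage holds a (weight, bias) pair; the true method operates on
    landmark vectors with shape-indexed features, simplified here to scalars.
    """

    def __init__(self, stages):
        self.stages = list(stages)  # one (w, b) per cascade stage

    def fit_shape(self, feature, shape):
        # Each stage refines the current estimate using the residual
        # between the observed feature and the current shape.
        for w, b in self.stages:
            shape = shape + w * (feature - shape) + b
        return shape

    def incremental_update(self, feature, init_shape, target, lr=0.1):
        # On-the-fly personalization (illustrative): nudge each stage
        # toward a target that a verifier has judged well-aligned.
        shape = init_shape
        for i, (w, b) in enumerate(self.stages):
            pred = shape + w * (feature - shape) + b
            err = target - pred
            self.stages[i] = (w + lr * err * (feature - shape),
                              b + lr * err)
            shape = pred
```

In this simplified setting, repeatedly calling `incremental_update` with verified frames shrinks the fitting error of subsequent `fit_shape` calls, mirroring how the paper's person-specific updates adapt generic offline models to one subject over the video.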
Session
Posters 2
Files
Extended Abstract (PDF, 1M)
Paper (PDF, 3M)
DOI
10.5244/C.30.129
https://dx.doi.org/10.5244/C.30.129
Citation
Xi Peng, Qiong Hu, Junzhou Huang and Dimitris Metaxas. Track Facial Points in Unconstrained Videos. In Richard C. Wilson, Edwin R. Hancock and William A. P. Smith, editors, Proceedings of the British Machine Vision Conference (BMVC), pages 129.1-129.13. BMVA Press, September 2016.
Bibtex
@inproceedings{BMVC2016_129,
title={Track Facial Points in Unconstrained Videos},
author={Xi Peng and Qiong Hu and Junzhou Huang and Dimitris Metaxas},
year={2016},
month={September},
pages={129.1--129.13},
articleno={129},
numpages={13},
booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
publisher={BMVA Press},
editor={Richard C. Wilson and Edwin R. Hancock and William A. P. Smith},
doi={10.5244/C.30.129},
isbn={1-901725-59-6},
url={https://dx.doi.org/10.5244/C.30.129}
}