Unlabelled 3D Motion Examples Improve Cross-View Action Recognition

Ankur Gupta, Alireza Shafaei, James Little and Robert Woodham

In Proceedings of the British Machine Vision Conference 2014
http://dx.doi.org/10.5244/C.28.46

Abstract

We demonstrate a novel strategy for unsupervised cross-view action recognition using multi-view feature synthesis. We do not rely on cross-view video annotations to transfer knowledge across views but use local features generated using motion capture data to learn the feature transformation. Motion capture data allows us to build a correspondence between two synthesized views at the feature level. We learn a feature mapping scheme for each view change by making a naive assumption that all features transform independently. This assumption, along with access to exact feature correspondences, dramatically simplifies learning. With this learned mapping we are able to "hallucinate" action descriptors corresponding to different viewpoints. This simple approach effectively models the transformation of BoW-based action descriptors under viewpoint change and outperforms the state of the art on the INRIA IXMAS dataset.
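The core idea in the abstract — learn a mapping between corresponding features synthesized from two viewpoints, then use it to "hallucinate" descriptors in the target view — can be sketched roughly as follows. This is a simplified illustration, not the paper's actual method: the function names are invented, and a single linear least-squares map stands in for the learned per-feature transformation.

```python
import numpy as np

def learn_view_mapping(src_feats, dst_feats):
    """Fit a linear map W so that src_feats @ W approximates dst_feats.

    src_feats, dst_feats: (n_examples, dim) arrays of corresponding
    local motion features rendered from two viewpoints (exact
    correspondence is available because both views are synthesized
    from the same motion capture data).
    """
    W, *_ = np.linalg.lstsq(src_feats, dst_feats, rcond=None)
    return W

def hallucinate_descriptor(descriptor, W):
    """Map a source-view descriptor into the target view."""
    return descriptor @ W

# Toy demo: corresponding features related by a known linear map.
rng = np.random.default_rng(0)
src = rng.standard_normal((200, 8))
true_map = rng.standard_normal((8, 8))
dst = src @ true_map

W = learn_view_mapping(src, dst)
print(np.allclose(W, true_map, atol=1e-6))  # recovers the map on clean data
```

Because the synthesized correspondences are exact, the mapping can be fit directly per view change, which is what makes the naive independence assumption tractable.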

Session

Poster Session

Files

Extended Abstract (PDF, 1 page, 282K)
Paper (PDF, 11 pages, 992K)
BibTeX File

Citation

Ankur Gupta, Alireza Shafaei, James Little, and Robert Woodham. Unlabelled 3D Motion Examples Improve Cross-View Action Recognition. Proceedings of the British Machine Vision Conference. BMVA Press, September 2014.

BibTeX

@inproceedings{BMVC.28.46,
	title = {Unlabelled 3D Motion Examples Improve Cross-View Action Recognition},
	author = {Gupta, Ankur and Shafaei, Alireza and Little, James and Woodham, Robert},
	year = {2014},
	booktitle = {Proceedings of the British Machine Vision Conference},
	publisher = {BMVA Press},
	editor = {Valstar, Michel and French, Andrew and Pridmore, Tony},
	doi = {10.5244/C.28.46}
}