BibTeX entry
@PHDTHESIS{201309Xiantong_Zhen,
AUTHOR={Xiantong Zhen},
TITLE={Feature Extraction and Representation for Human Action Recognition},
SCHOOL={University of Sheffield},
MONTH=sep,
YEAR=2013,
URL={http://www.bmva.org/theses/2013/2013-zhen.pdf},
}
Abstract
Human action recognition, one of the most important topics in computer vision, has been researched extensively over the last few decades; however, it remains a challenging task, especially in realistic scenarios. The difficulties arise mainly from large intra-class variation, background clutter, occlusions, illumination changes and noise. In this thesis, we aim to improve human action recognition through feature extraction and representation, using both holistic and local methods.
Specifically, we first propose three approaches for the holistic representation of actions. In the first approach, we explicitly extract motion and structure features from video sequences by converting the video representation task into a 2D image representation problem. In the second and third approaches, we treat video sequences as 3D volumes and propose spatio-temporal pyramid structures to extract multi-scale global features; Gabor filters and steerable filters are extended to the video domain for holistic representation, which has been shown to be effective for action recognition. With regard to local representations, we first conduct a comprehensive evaluation of local methods, including the bag-of-words (BoW) model, sparse coding, match kernels and classifiers based on image-to-class (I2C) distances. Motivated by the findings of this evaluation, we propose two distinct algorithms for discriminative dimensionality reduction of local spatio-temporal descriptors: the first is based on image-to-class distances, while the second exploits local Gaussians.
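For context on the local representations mentioned above, the following is a minimal bag-of-words sketch, not the thesis code: the descriptor extraction step is left abstract (extract_descriptors would stand in for a real spatio-temporal detector/descriptor), and the vocabulary size and clustering settings are arbitrary illustrative choices.

```python
# Minimal bag-of-words (BoW) sketch for local spatio-temporal descriptors.
# Illustrative only: per-video descriptor arrays (shape (n_i, d)) are assumed
# to come from some hypothetical extractor; k is an arbitrary vocabulary size.
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptor_sets, k=200, seed=0):
    """Cluster all training descriptors into a k-word visual vocabulary."""
    all_desc = np.vstack(descriptor_sets)
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit(all_desc)

def bow_histogram(descriptors, vocabulary):
    """Quantise one video's descriptors and return an L1-normalised histogram."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Usage (hypothetical descriptor arrays):
# vocab = build_vocabulary([desc_video_1, desc_video_2, ...])
# x = bow_histogram(desc_test_video, vocab)  # fixed-length video representation
```

The resulting fixed-length histograms can then be fed to any standard classifier; the thesis evaluates this kind of BoW pipeline alongside sparse coding, match kernels and I2C-distance classifiers.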
We evaluate the proposed methods through extensive experiments on widely used human action datasets, including KTH, IXMAS, UCF Sports, UCF YouTube and HMDB51. Experimental results demonstrate the effectiveness of our methods for action recognition.