Multi-Task Transfer Methods to Improve One-Shot Learning for Multimedia Event Detection
Wang Yan, Jordan Yap and Greg Mori
Abstract
Learning a model for complex video event detection from only one positive sample is a challenging and important problem in practice, yet it has seldom been addressed. This paper proposes a new one-shot learning method based on multi-task learning to address this problem. Information from external relevant events is utilized to overcome the paucity of positive samples for the given event. Relevant events are identified implicitly and are emphasized more during training. Moreover, a new dataset focusing on personal video search is collected. Experiments on both the TRECVid Multimedia Event Detection video set and the new dataset verify the efficacy of the proposed methods.
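To give a rough sense of the idea described in the abstract, the sketch below is a minimal, hypothetical illustration of multi-task transfer for one-shot event detection: the single positive example of the target event is combined with positives from external auxiliary events, which are down-weighted by an estimated relevance score so that more relevant events are emphasized during training. This is not the paper's actual formulation; the feature dimensionality, the auxiliary event names, the cosine-similarity relevance proxy, and the logistic-regression detector are all assumptions made for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical pre-extracted video features (placeholders for real descriptors).
# x_pos: the single positive example for the target event.
# X_neg: background/negative videos.
# aux_events: positive examples from external, possibly relevant events.
d = 128
x_pos = rng.normal(size=d)
X_neg = rng.normal(size=(200, d))
aux_events = {name: rng.normal(size=(30, d))
              for name in ["parade", "parkour", "flash_mob"]}

def relevance(x_pos, X_aux):
    """Crude relevance proxy: mean (clipped) cosine similarity between the
    target's lone positive sample and an auxiliary event's positives."""
    a = x_pos / np.linalg.norm(x_pos)
    B = X_aux / np.linalg.norm(X_aux, axis=1, keepdims=True)
    return float(np.clip(B @ a, 0, None).mean())

# Build one weighted training set: the lone positive, the negatives, and the
# auxiliary positives down-weighted by their estimated relevance.
X, y, w = [x_pos], [1], [1.0]
X += list(X_neg); y += [0] * len(X_neg); w += [1.0] * len(X_neg)
for name, X_aux in aux_events.items():
    r = relevance(x_pos, X_aux)
    X += list(X_aux); y += [1] * len(X_aux); w += [r] * len(X_aux)

clf = LogisticRegression(max_iter=1000)
clf.fit(np.asarray(X), np.asarray(y), sample_weight=np.asarray(w))
scores = clf.decision_function(X_neg)  # rank candidate videos by detector score
```

The weighting step is the key point: auxiliary events that look more similar to the target contribute more to the detector, which is one simple way to realize the "relevant events are emphasized more during training" idea under these assumptions.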
Session
Poster 1
Files
Extended Abstract (PDF, 543K)
Paper (PDF, 7M)
DOI
10.5244/C.29.37
https://dx.doi.org/10.5244/C.29.37
Citation
Wang Yan, Jordan Yap and Greg Mori. Multi-Task Transfer Methods to Improve One-Shot Learning for Multimedia Event Detection. In Xianghua Xie, Mark W. Jones, and Gary K. L. Tam, editors, Proceedings of the British Machine Vision Conference (BMVC), pages 37.1-37.13. BMVA Press, September 2015.
Bibtex
@inproceedings{BMVC2015_37,
title={Multi-Task Transfer Methods to Improve One-Shot Learning for Multimedia Event Detection},
author={Wang Yan and Jordan Yap and Greg Mori},
year={2015},
month={September},
pages={37.1--37.13},
articleno={37},
numpages={13},
booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
publisher={BMVA Press},
editor={Xianghua Xie and Mark W. Jones and Gary K. L. Tam},
doi={10.5244/C.29.37},
isbn={1-901725-53-7},
url={https://dx.doi.org/10.5244/C.29.37}
}