Adapting Object Detectors from Images to Weakly Labeled Videos
Omit Chanda, Eu Wern Teh, Mrigank Rochan, Zhenyu Guo and Yang Wang
Abstract
Due to the domain shift between images and videos, standard object detectors trained on images usually do not perform well on videos. At the same time, it is difficult to directly train object detectors from video data due to the lack of labeled video datasets. In this paper, we consider the problem of localizing objects in weakly labeled videos. A video is weakly labeled if we know the presence/absence of an object in the video (or in each frame), but we do not know its exact spatial location. In addition to weakly labeled videos, we assume access to a set of fully labeled images. We incorporate domain adaptation into our framework and adapt the information from the labeled images (source domain) to the weakly labeled videos (target domain). Our experimental results on standard benchmark datasets demonstrate the effectiveness of our proposed approach.
Session
Posters
DOI
10.5244/C.31.56
https://dx.doi.org/10.5244/C.31.56
Citation
Omit Chanda, Eu Wern Teh, Mrigank Rochan, Zhenyu Guo and Yang Wang. Adapting Object Detectors from Images to Weakly Labeled Videos. In T.K. Kim, S. Zafeiriou, G. Brostow and K. Mikolajczyk, editors, Proceedings of the British Machine Vision Conference (BMVC), pages 56.1-56.12. BMVA Press, September 2017.
Bibtex
@inproceedings{BMVC2017_56,
title={Adapting Object Detectors from Images to Weakly Labeled Videos},
author={Omit Chanda and Eu Wern Teh and Mrigank Rochan and Zhenyu Guo and Yang Wang},
year={2017},
month={September},
pages={56.1--56.12},
articleno={56},
numpages={12},
booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
publisher={BMVA Press},
editor={Tae-Kyun Kim and Stefanos Zafeiriou and Gabriel Brostow and Krystian Mikolajczyk},
doi={10.5244/C.31.56},
isbn={1-901725-60-X},
url={https://dx.doi.org/10.5244/C.31.56}
}