Object localization in ImageNet by looking out of the window
Alexander Vezhnevets and Vittorio Ferrari
Abstract
We propose a method for annotating the location of objects in ImageNet. Traditionally, this is cast as an image window classification problem, where each window is considered independently and scored based on its appearance alone. Instead, we propose a method that scores each candidate window in the context of all other windows in the image, taking into account their similarity in appearance space as well as their spatial relations in the image plane. We devise a fast and exact procedure to optimize our scoring function over all candidate windows in an image, and we learn its parameters using structured output regression. We demonstrate on 92,000 images from ImageNet that this significantly improves localization over recent techniques that score windows in isolation.
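The contextual scoring idea can be illustrated with a toy sketch. The Python snippet below is illustrative only and not the authors' model; names such as contextual_score, w_app and w_ctx are hypothetical. It scores one candidate window by combining its own appearance score with a context term aggregated over all other windows, weighted by appearance similarity and a simple spatial-relation feature. The paper instead defines an exact scoring function optimized over all candidate windows jointly, with parameters learned by structured output regression.

# Illustrative sketch only, not the paper's actual scoring function.
import numpy as np

def appearance_score(feat, w_app):
    # Per-window score from appearance features alone (linear model assumed).
    return float(feat @ w_app)

def spatial_relation(box_a, box_b):
    # Toy spatial feature: centre offset between two [x1, y1, x2, y2] boxes.
    cxa, cya = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    cxb, cyb = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    return np.array([cxb - cxa, cyb - cya])

def contextual_score(i, feats, boxes, w_app, w_ctx):
    # Score window i in the context of all other windows: its own appearance
    # term plus a context term weighting every other window by appearance
    # similarity and a simple spatial-relation feature.
    own = appearance_score(feats[i], w_app)
    ctx = 0.0
    for j in range(len(boxes)):
        if j == i:
            continue
        sim = float(feats[i] @ feats[j])            # appearance similarity
        rel = spatial_relation(boxes[i], boxes[j])  # spatial relation
        ctx += sim * float(rel @ w_ctx)
    return own + ctx / max(len(boxes) - 1, 1)

# Usage: pick the best window by exhaustive scoring over toy data.
feats = np.random.rand(5, 8)   # hypothetical appearance features, one row per window
boxes = np.random.rand(5, 4)   # hypothetical candidate boxes
w_app, w_ctx = np.random.rand(8), np.random.rand(2)
best = max(range(len(boxes)), key=lambda i: contextual_score(i, feats, boxes, w_app, w_ctx))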
Session
Poster 1
Files
Extended Abstract (PDF, 2 MB)
Paper (PDF, 4 MB)
DOI
10.5244/C.29.27
https://dx.doi.org/10.5244/C.29.27
Citation
Alexander Vezhnevets and Vittorio Ferrari. Object localization in ImageNet by looking out of the window. In Xianghua Xie, Mark W. Jones, and Gary K. L. Tam, editors, Proceedings of the British Machine Vision Conference (BMVC), pages 27.1-27.12. BMVA Press, September 2015.
Bibtex
@inproceedings{BMVC2015_27,
title={Object localization in ImageNet by looking out of the window},
author={Alexander Vezhnevets and Vittorio Ferrari},
year={2015},
month={September},
pages={27.1--27.12},
articleno={27},
numpages={12},
booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
publisher={BMVA Press},
editor={Xianghua Xie and Mark W. Jones and Gary K. L. Tam},
doi={10.5244/C.29.27},
isbn={1-901725-53-7},
url={https://dx.doi.org/10.5244/C.29.27}
}