OnionNet: Sharing Features in Cascaded Deep Classifiers
Martin Simonovsky and Nikos Komodakis
Abstract
The focus of our work is speeding up the evaluation of deep neural networks in retrieval scenarios, where conventional architectures may spend too much time on negative examples. We propose to replace a monolithic network with our novel cascade of feature-sharing deep classifiers, called OnionNet, where subsequent stages may add both new layers and new feature channels to the previous ones. Importantly, intermediate feature maps are shared among classifiers, so they never need to be recomputed. To accomplish this, the model is trained end-to-end in a principled way under a joint loss. We validate our approach in theory and on a synthetic benchmark. As demonstrated in three applications (patch matching, object detection, and image retrieval), our cascade can operate significantly faster than both monolithic networks and traditional cascades without sharing, at the cost of a marginal decrease in precision.
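The cascade's evaluation logic can be sketched in plain Python (a hypothetical illustration with random stand-in weights, not the authors' implementation): each stage reuses the feature vector computed by earlier stages, appends its own new features, and rejects an example as negative as soon as a stage's score falls below that stage's threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_stage(in_dim, out_dim):
    """Hypothetical cascade stage: a small layer widening the shared
    feature vector, plus a scalar scoring head (weights are random
    stand-ins for trained parameters)."""
    w = rng.standard_normal((out_dim, in_dim)) * 0.1
    head = rng.standard_normal(out_dim) * 0.1

    def stage(feat):
        new_feat = np.tanh(w @ feat)               # new layers / channels
        shared = np.concatenate([feat, new_feat])  # earlier features kept, not recomputed
        score = float(head @ new_feat)
        return shared, score

    return stage

def cascade(x, stages, thresholds):
    """Run stages in order; exit early when a score falls below the
    stage threshold, so cheap stages filter out most negatives."""
    feat = x
    score = float("-inf")
    for stage, t in zip(stages, thresholds):
        feat, score = stage(feat)
        if score < t:
            return False, score   # rejected as negative at this stage
    return True, score            # survived all stages: classified positive

# Two stages: the second consumes the 8 input dims plus 4 shared channels.
stages = [make_stage(8, 4), make_stage(12, 4)]
accepted, score = cascade(np.ones(8), stages, thresholds=[0.0, 0.0])
```

The key point the sketch illustrates is that `shared` grows monotonically across stages, so a later classifier builds on earlier feature maps instead of recomputing them from the input.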
Session
Posters 2
Files
Extended Abstract (PDF, 67K)
Paper (PDF, 262K)
DOI
10.5244/C.30.79
https://dx.doi.org/10.5244/C.30.79
Citation
Martin Simonovsky and Nikos Komodakis. OnionNet: Sharing Features in Cascaded Deep Classifiers. In Richard C. Wilson, Edwin R. Hancock and William A. P. Smith, editors, Proceedings of the British Machine Vision Conference (BMVC), pages 79.1-79.13. BMVA Press, September 2016.
Bibtex
@inproceedings{BMVC2016_79,
title={OnionNet: Sharing Features in Cascaded Deep Classifiers},
author={Martin Simonovsky and Nikos Komodakis},
year={2016},
month={September},
pages={79.1--79.13},
articleno={79},
numpages={13},
booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
publisher={BMVA Press},
editor={Richard C. Wilson and Edwin R. Hancock and William A. P. Smith},
doi={10.5244/C.30.79},
isbn={1-901725-59-6},
url={https://dx.doi.org/10.5244/C.30.79}
}