Learning to Detect and Match Keypoints with Deep Architectures

Hani Altwaijry, Andreas Veit and Serge Belongie

Abstract

Feature detection and description is a pivotal step in many computer vision pipelines. Traditionally, human-engineered features have been the main workhorse in this domain. In this paper, we present a novel approach for learning to detect and describe keypoints in images by leveraging deep architectures. To enable a learning-based approach, we collect a large-scale dataset of patches with matching multiscale keypoints. The proposed model learns from this vast dataset to identify and describe meaningful keypoints. We evaluate the effectiveness of the model's learned representations for detecting multiscale keypoints and describing their respective support regions.
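The full paper (linked below) describes the actual network and training procedure. As a purely illustrative sketch, and not the authors' architecture, the following PyTorch snippet shows one way a patch-based model with separate detection and description heads could be laid out; the patch size, layer widths, and descriptor dimension are assumed placeholders.

# Hypothetical sketch (not the paper's architecture): a small convolutional
# network that scores an image patch as a keypoint and emits a descriptor.
import torch
import torch.nn as nn

class DetectDescribeNet(nn.Module):
    def __init__(self, desc_dim=128):
        super().__init__()
        # Shared convolutional trunk over a 64x64 grayscale patch (assumed size).
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        # Detection head: probability that the patch is centered on a keypoint.
        self.detector = nn.Linear(128, 1)
        # Description head: fixed-length descriptor for matching.
        self.descriptor = nn.Linear(128, desc_dim)

    def forward(self, patch):
        feat = self.trunk(patch).flatten(1)
        score = torch.sigmoid(self.detector(feat))
        desc = nn.functional.normalize(self.descriptor(feat), dim=1)
        return score, desc

# Usage: a batch of 64x64 grayscale patches.
patches = torch.randn(8, 1, 64, 64)
scores, descs = DetectDescribeNet()(patches)

In such a setup, the detection head could be trained with keypoint/non-keypoint labels derived from the patch dataset, while the descriptor head could be trained with a matching loss over corresponding patches; the paper itself should be consulted for the actual losses and architecture.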

Session

Posters 1

Files

Extended Abstract (PDF, 4M)
Paper (PDF, 18M)

DOI

10.5244/C.30.49
https://dx.doi.org/10.5244/C.30.49

Citation

Hani Altwaijry, Andreas Veit and Serge Belongie. Learning to Detect and Match Keypoints with Deep Architectures. In Richard C. Wilson, Edwin R. Hancock and William A. P. Smith, editors, Proceedings of the British Machine Vision Conference (BMVC), pages 49.1-49.12. BMVA Press, September 2016.

Bibtex

        @inproceedings{BMVC2016_49,
        	title={Learning to Detect and Match Keypoints with Deep Architectures},
        	author={Hani Altwaijry and Andreas Veit and Serge Belongie},
        	year={2016},
        	month={September},
        	pages={49.1--49.12},
        	articleno={49},
        	numpages={12},
        	booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
        	publisher={BMVA Press},
        	editor={Richard C. Wilson and Edwin R. Hancock and William A. P. Smith},
        	doi={10.5244/C.30.49},
        	isbn={1-901725-59-6},
        	url={https://dx.doi.org/10.5244/C.30.49}
        }