Online Visual Tracking via Coupled Object-Context Dictionary

Mingquan Ye, Hong Chang and Xilin Chen

Abstract

Sparse representation and context information have been widely applied in visual tracking. In this paper, we exploit context information outside the target bounding box to construct a distinctive background dictionary. A pure target dictionary is then constructed by filtering out background patches from within the target bounding box. At each frame, all relevant patches are encoded over the coupled dictionaries, and the resulting reconstruction errors yield an efficient confidence value for each bounding-box candidate. By examining how the reconstruction errors over the coupled dictionaries change, the tracker handles occlusion effectively. Both quantitative and qualitative results demonstrate that the proposed tracker performs favorably against several state-of-the-art trackers on challenging video sequences.
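The core idea above (score a candidate by how much better the target dictionary explains its patches than the background dictionary does) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the `omp` sparse coder, the `confidence` scoring rule, and all parameter names are assumptions chosen for clarity.

```python
import numpy as np

def omp(D, y, n_nonzero=5):
    """Orthogonal matching pursuit: greedy sparse code of y over the columns of D.
    A generic stand-in for whatever sparse solver the paper actually uses."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit the coefficients on the selected support by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - D @ x
    return x

def confidence(patch, D_target, D_background, n_nonzero=5):
    """Illustrative confidence: large when the target dictionary reconstructs the
    patch with lower error than the background dictionary, small otherwise."""
    e_t = np.linalg.norm(patch - D_target @ omp(D_target, patch, n_nonzero))
    e_b = np.linalg.norm(patch - D_background @ omp(D_background, patch, n_nonzero))
    return float(np.exp(e_b - e_t))
```

A patch drawn from the target's appearance subspace scores above 1, while a context/background patch scores below 1; comparing `e_t` and `e_b` per patch also suggests how a sudden rise of target-side error with stable background-side error could flag occlusion.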

Session

Poster 2

Files

Extended Abstract (PDF, 961K)
Paper (PDF, 6M)

DOI

10.5244/C.29.165
https://dx.doi.org/10.5244/C.29.165

Citation

Mingquan Ye, Hong Chang and Xilin Chen. Online Visual Tracking via Coupled Object-Context Dictionary. In Xianghua Xie, Mark W. Jones, and Gary K. L. Tam, editors, Proceedings of the British Machine Vision Conference (BMVC), pages 165.1-165.11. BMVA Press, September 2015.

Bibtex

@inproceedings{BMVC2015_165,
	title={Online Visual Tracking via Coupled Object-Context Dictionary},
	author={Mingquan Ye and Hong Chang and Xilin Chen},
	year={2015},
	month={September},
	pages={165.1--165.11},
	articleno={165},
	numpages={11},
	booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
	publisher={BMVA Press},
	editor={Xianghua Xie and Mark W. Jones and Gary K. L. Tam},
	doi={10.5244/C.29.165},
	isbn={1-901725-53-7},
	url={https://dx.doi.org/10.5244/C.29.165}
}