Linear Global Translation Estimation with Feature Tracks

Zhaopeng Cui, Nianjuan Jiang, Chengzhou Tang and Ping Tan

Abstract

This paper derives a novel linear position constraint for cameras seeing a common scene point, which leads to a direct linear method for global camera translation estimation. Unlike previous solutions, this method handles collinear camera motion and weak image association at the same time. The final linear formulation does not involve the coordinates of scene points, which makes it efficient even for large-scale data. We solve the linear equation under the $L_1$ norm, which makes our system more robust to outliers in essential matrices and feature correspondences. We evaluate this method on both sequentially captured images and unordered Internet images. The experiments demonstrate its strength in robustness, accuracy, and efficiency.
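The key numerical step mentioned in the abstract, solving an over-determined linear system under the $L_1$ norm for robustness to outliers, can be sketched briefly. The Python snippet below is only an illustrative approximation via iteratively reweighted least squares (IRLS) on synthetic data; it is not the authors' solver, and the matrix A, vector b, and all names and parameters are placeholder assumptions rather than anything taken from the paper.

# Hedged sketch: approximately minimize ||A x - b||_1 with IRLS.
# Illustrates why an L1 objective tolerates gross outliers better than L2;
# it does NOT reproduce the paper's translation-estimation constraints.
import numpy as np

def irls_l1(A, b, n_iters=50, eps=1e-6):
    """Approximate L1-norm solution of the over-determined system A x = b."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # start from the L2 solution
    for _ in range(n_iters):
        r = A @ x - b                             # current residuals
        w = 1.0 / np.maximum(np.abs(r), eps)      # down-weight large residuals
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_true = rng.normal(size=3)
    A = rng.normal(size=(100, 3))
    b = A @ x_true
    b[:10] += 5.0 * rng.normal(size=10)           # inject gross outliers
    x_l1 = irls_l1(A, b)
    x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]
    print("L1 (IRLS) error:", np.linalg.norm(x_l1 - x_true))
    print("L2 error:       ", np.linalg.norm(x_l2 - x_true))

On such data the IRLS estimate stays close to the true solution while the plain least-squares estimate is pulled toward the outliers, which is the robustness property the abstract refers to.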

Session

Poster 1

Files

Extended Abstract (PDF, 263K)
Paper (PDF, 3M)
Supplemental Materials (ZIP, 2M)

DOI

10.5244/C.29.46
https://dx.doi.org/10.5244/C.29.46

Citation

Zhaopeng Cui, Nianjuan Jiang, Chengzhou Tang and Ping Tan. Linear Global Translation Estimation with Feature Tracks. In Xianghua Xie, Mark W. Jones, and Gary K. L. Tam, editors, Proceedings of the British Machine Vision Conference (BMVC), pages 46.1-46.13. BMVA Press, September 2015.

Bibtex

@inproceedings{BMVC2015_46,
	title={Linear Global Translation Estimation with Feature Tracks},
	author={Zhaopeng Cui and Nianjuan Jiang and Chengzhou Tang and Ping Tan},
	year={2015},
	month={September},
	pages={46.1--46.13},
	articleno={46},
	numpages={13},
	booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
	publisher={BMVA Press},
	editor={Xianghua Xie and Mark W. Jones and Gary K. L. Tam},
	doi={10.5244/C.29.46},
	isbn={1-901725-53-7},
	url={https://dx.doi.org/10.5244/C.29.46}
}