Top down saliency estimation via superpixel-based discriminative dictionaries

Aysun Kocak, Kemal Cizmeciler, Aykut Erdem and Erkut Erdem

In Proceedings of the British Machine Vision Conference 2014
http://dx.doi.org/10.5244/C.28.73

Abstract

Predicting where humans look in images has gained significant popularity in recent years. In this work, we present a novel method for learning top-down visual saliency, which is well-suited to locating objects of interest in complex scenes. During training, we jointly learn a superpixel-based class-specific dictionary and a Conditional Random Field (CRF). While using such a discriminative dictionary helps to distinguish target objects from the background, performing the computations at the superpixel level allows us to improve the accuracy of object localization. Experimental results on the Graz-02 and PASCAL VOC 2007 datasets show that the proposed approach achieves state-of-the-art results and produces much better saliency maps.
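For illustration only, the sketch below shows one plausible way the superpixel-plus-dictionary idea can be wired up: SLIC superpixels, a dictionary learned from (hypothetically labelled) object superpixels, and sparse-coding reconstruction error used as a per-superpixel saliency score. It uses scikit-image and scikit-learn as stand-ins, with toy mean-colour features and synthetic data; the actual method jointly learns a discriminative dictionary with a CRF and uses richer local descriptors, none of which is reproduced here.

import numpy as np
from skimage.segmentation import slic
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

# Toy image: random noise with a brighter "object" patch (stand-in for real data).
rng = np.random.default_rng(0)
image = rng.random((128, 128, 3))
image[40:90, 40:90] += 0.6
image = np.clip(image, 0.0, 1.0)

# 1) Over-segment the image into superpixels with SLIC.
segments = slic(image, n_segments=100, compactness=10, start_label=0)
n_sp = segments.max() + 1

# 2) Describe each superpixel with a simple feature (mean colour here).
feats = np.stack([image[segments == i].mean(axis=0) for i in range(n_sp)])

# 3) Learn a class-specific dictionary from hypothetically labelled object
#    superpixels, e.g. those mostly inside an annotated object region.
object_mask = np.zeros(segments.shape, dtype=bool)
object_mask[40:90, 40:90] = True
is_object = np.array([object_mask[segments == i].mean() > 0.5 for i in range(n_sp)])
D = MiniBatchDictionaryLearning(n_components=8, random_state=0).fit(feats[is_object]).components_

# 4) Sparse-code every superpixel against the object dictionary; a low
#    reconstruction error suggests the superpixel belongs to the target class.
codes = sparse_encode(feats, D, algorithm='omp', n_nonzero_coefs=2)
recon_err = np.linalg.norm(feats - codes @ D, axis=1)
saliency_sp = 1.0 / (1.0 + recon_err)  # crude per-superpixel score (no CRF smoothing)

# 5) Paint the per-superpixel scores back onto the pixel grid to get a map.
saliency_map = saliency_sp[segments]
print(saliency_map.shape, float(saliency_map.min()), float(saliency_map.max()))

In the paper, such per-superpixel scores would instead come from the jointly learned discriminative dictionary and serve as potentials in a CRF, which enforces spatial consistency across neighbouring superpixels.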

Session

Poster Session

Files

Extended Abstract (PDF, 1 page, 240K)
Paper (PDF, 12 pages, 1.2M)
Bibtex File

Citation

Aysun Kocak, Kemal Cizmeciler, Aykut Erdem, and Erkut Erdem. Top down saliency estimation via superpixel-based discriminative dictionaries. In Proceedings of the British Machine Vision Conference. BMVA Press, September 2014.

BibTex

@inproceedings{BMVC.28.73,
	title = {Top down saliency estimation via superpixel-based discriminative dictionaries},
	author = {Kocak, Aysun and Cizmeciler, Kemal and Erdem, Aykut and Erdem, Erkut},
	year = {2014},
	booktitle = {Proceedings of the British Machine Vision Conference},
	publisher = {BMVA Press},
	editor = {Valstar, Michel and French, Andrew and Pridmore, Tony},
	doi = {10.5244/C.28.73}
}