Adaptive Transductive Transfer Machine
In Proceedings British Machine Vision Conference 2014
http://dx.doi.org/10.5244/C.28.60
Abstract
Classification methods traditionally work under the assumption that the training and test sets are sampled from similar distributions (domains). However, when such methods are deployed in practice, the conditions under which test data are acquired rarely match those of the training set exactly. In this paper, we exploit the fact that it is often possible to gather unlabeled samples from a test/target domain in order to improve the model built from the source training set. We propose Adaptive Transductive Transfer Machines, which approach this problem by combining four types of adaptation: a lower-dimensional space shared between the two domains, a set of local transformations to further increase the domain similarity, a classifier parameter adaptation method that modifies the learner for the new domain, and a set of class-conditional transformations aiming to increase the similarity between the posterior probabilities of samples in the source and target sets. We show that our pipeline leads to an improvement over the state of the art on cross-domain image classification datasets, using raw images or basic features.
Session
Poster Session
Files
Extended Abstract (PDF, 1 page, 332K)
Paper (PDF, 12 pages, 386K)
Bibtex File
Citation
Nazli Farajidavar, Teofilo deCampos, and Josef Kittler. Adaptive Transductive Transfer Machine. In Proceedings of the British Machine Vision Conference. BMVA Press, September 2014.
BibTex
@inproceedings{BMVC.28.60,
  title     = {Adaptive Transductive Transfer Machine},
  author    = {Farajidavar, Nazli and deCampos, Teofilo and Kittler, Josef},
  year      = {2014},
  booktitle = {Proceedings of the British Machine Vision Conference},
  publisher = {BMVA Press},
  editor    = {Valstar, Michel and French, Andrew and Pridmore, Tony},
  doi       = {10.5244/C.28.60}
}