Adapting Models to Signal Degradation using Distillation

Jong-Chyi Su and Subhransu Maji

Abstract

Model compression and knowledge distillation have been successfully applied to cross-architecture and cross-domain transfer learning. However, a key requirement is that training examples are in correspondence across the domains. We show that in many scenarios of practical importance, such aligned data can be synthetically generated using computer graphics pipelines, allowing domain adaptation through distillation. We apply this technique to learn models for recognizing low-resolution images using labeled high-resolution images, non-localized objects using labeled localized objects, line drawings using labeled color images, etc. Experiments on various fine-grained recognition datasets demonstrate that the technique improves recognition performance on the low-quality data and beats strong baselines for domain adaptation.
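
The core recipe the abstract describes is to train a "student" model on degraded inputs to mimic a "teacher" trained on the corresponding high-quality inputs, using synthetically generated pairs. Below is a minimal PyTorch sketch of one such adaptation step, not the paper's exact configuration: the `teacher`, `student`, and `degrade` names, the temperature `T`, and the mixing weight `alpha` are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, T=4.0):
        # Soften both distributions with temperature T and match the student
        # to the teacher; the T**2 factor keeps gradient magnitudes comparable
        # across temperatures (as in Hinton et al.'s distillation).
        soft_targets = F.softmax(teacher_logits / T, dim=1)
        log_probs = F.log_softmax(student_logits / T, dim=1)
        return F.kl_div(log_probs, soft_targets, reduction="batchmean") * T ** 2

    def adapt_step(teacher, student, optimizer, x_hq, labels, degrade,
                   alpha=0.5, T=4.0):
        # The degraded copy of each labeled high-quality image is generated
        # synthetically (e.g., downsampling, removing color, or cropping out
        # the localization), which yields the paired training data required.
        x_lq = degrade(x_hq)
        with torch.no_grad():
            teacher_logits = teacher(x_hq)  # teacher stays fixed during adaptation
        student_logits = student(x_lq)
        loss = (alpha * F.cross_entropy(student_logits, labels)
                + (1.0 - alpha) * distillation_loss(student_logits, teacher_logits, T))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()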

Session

Posters

Files

Paper (PDF)

DOI

10.5244/C.31.21
https://dx.doi.org/10.5244/C.31.21

Citation

Jong-Chyi Su and Subhransu Maji. Adapting Models to Signal Degradation using Distillation. In T.K. Kim, S. Zafeiriou, G. Brostow and K. Mikolajczyk, editors, Proceedings of the British Machine Vision Conference (BMVC), pages 21.1–21.14. BMVA Press, September 2017.

Bibtex

@inproceedings{BMVC2017_21,
    title={Adapting Models to Signal Degradation using Distillation},
    author={Jong-Chyi Su and Subhransu Maji},
    year={2017},
    month={September},
    pages={21.1--21.14},
    articleno={21},
    numpages={14},
    booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
    publisher={BMVA Press},
    editor={Tae-Kyun Kim and Stefanos Zafeiriou and Gabriel Brostow and Krystian Mikolajczyk},
    doi={10.5244/C.31.21},
    isbn={1-901725-60-X},
    url={https://dx.doi.org/10.5244/C.31.21}
}