Towards Deep Style Transfer: A Content-Aware Perspective
Yi-Lei Chen and Chiou-Ting Hsu
Abstract
Modern research has demonstrated that many eye-catching images can be generated by style transfer via deep neural networks. There is, however, a dearth of research on content-aware style transfer. In this paper, we generalize the neural algorithm for style transfer from two perspectives: where to transfer and what to transfer. To specify where to transfer, we propose a simple yet effective strategy, named masking out, to constrain the transfer layout. To specify what to transfer, we define a new style feature based on high-order statistics to better characterize content coherency. Without resorting to additional local matching or MRF models, the proposed method seamlessly embeds the desired content information, either semantic-aware or saliency-aware, into the original framework. Experimental results show that our method is applicable to various types of style transfer and can be extended to image inpainting.
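The "masking out" idea can be illustrated with a minimal sketch: in Gram-matrix-based neural style transfer, the style statistic is a second-order moment of a feature map; restricting that statistic to a binary spatial mask constrains where style is matched. The function names and the NumPy formulation below are illustrative assumptions, not code from the paper.

```python
import numpy as np

def gram(features):
    """Standard second-order style statistic (Gram matrix) of a C x H x W feature map."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def masked_gram(features, mask):
    """Illustrative 'masking out': zero out feature responses outside a binary
    H x W mask before computing the style statistic, so style is only
    matched inside the masked region (i.e., 'where to transfer')."""
    c, h, w = features.shape
    m = mask.reshape(1, h * w).astype(features.dtype)
    f = features.reshape(c, h * w) * m       # keep responses inside the mask
    n = max(float(m.sum()), 1.0)             # normalize by masked area
    return f @ f.T / n
```

With an all-ones mask, `masked_gram` reduces to the ordinary Gram matrix, which is a quick sanity check that the masked statistic generalizes the unmasked one.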
Session
Segmentation
Files
Extended Abstract (PDF, 169K)
Paper (PDF, 1M)
DOI
10.5244/C.30.8
https://dx.doi.org/10.5244/C.30.8
Citation
Yi-Lei Chen and Chiou-Ting Hsu. Towards Deep Style Transfer: A Content-Aware Perspective. In Richard C. Wilson, Edwin R. Hancock and William A. P. Smith, editors, Proceedings of the British Machine Vision Conference (BMVC), pages 8.1-8.11. BMVA Press, September 2016.
Bibtex
@inproceedings{BMVC2016_8,
title={Towards Deep Style Transfer: A Content-Aware Perspective},
author={Yi-Lei Chen and Chiou-Ting Hsu},
year={2016},
month={September},
pages={8.1--8.11},
articleno={8},
numpages={11},
booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
publisher={BMVA Press},
editor={Richard C. Wilson and Edwin R. Hancock and William A. P. Smith},
doi={10.5244/C.30.8},
isbn={1-901725-59-6},
url={https://dx.doi.org/10.5244/C.30.8}
}