Data-free Parameter Pruning for Deep Neural Networks

Suraj Srinivas and R. Venkatesh Babu

Abstract

Deep neural networks (NNs) with millions of parameters are at the heart of many state-of-the-art computer vision systems today. However, recent works have shown that much smaller models can achieve similar levels of performance. In this work, we address the problem of pruning parameters in a trained NN model. Instead of removing individual weights one at a time as done in previous works, we remove one neuron at a time. We show that similar neurons are redundant, and propose a systematic way to remove them. Our experiments in pruning the densely connected layers show that we can remove up to 85% of the total parameters of an MNIST-trained network, and about 35% for AlexNet, without significantly affecting performance. Our method can be applied on top of most networks with a fully connected layer to yield a smaller network.
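
The abstract describes the method only at a high level. As one plausible illustration of the idea, the NumPy sketch below (all names and the exact saliency formula are our assumptions, not the authors' released code) finds a hidden neuron whose incoming weight set nearly duplicates another's, deletes it, and folds its outgoing weights into the surviving neuron so the layer's output changes as little as possible, without touching any training data:

import numpy as np

def prune_similar_neurons(W1, W2, num_to_remove):
    # W1: (n_in, n_hidden) incoming weights; column j is neuron j's weight set.
    # W2: (n_hidden, n_out) outgoing weights; row j carries neuron j's activation forward.
    W1, W2 = W1.copy(), W2.copy()
    keep = list(range(W1.shape[1]))
    for _ in range(num_to_remove):
        best_sal, best = np.inf, None
        for a in range(len(keep)):
            for b in range(len(keep)):
                if a == b:
                    continue
                i, j = keep[a], keep[b]
                # Removing j in favour of i is cheap when their incoming weight
                # sets are close and j's outgoing weights are small.
                eps2 = np.sum((W1[:, i] - W1[:, j]) ** 2)
                sal = eps2 * np.sum(W2[j] ** 2)
                if sal < best_sal:
                    best_sal, best = sal, (i, j, b)
        i, j, b = best
        W2[i] += W2[j]   # "surgery": fold j's outgoing weights into i's
        keep.pop(b)      # delete neuron j from the layer
    return W1[:, keep], W2[keep]

# Toy usage: prune 30 of 100 hidden units in a random 784-100-10 net.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(784, 100))
W2 = rng.normal(size=(100, 10))
W1_small, W2_small = prune_similar_neurons(W1, W2, 30)

The exhaustive pairwise search is quadratic in the number of hidden units and is kept here only for clarity; the key property is that the whole procedure is data-free, since it looks only at the weights.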

Session

Poster 1

Files

Extended Abstract (PDF, 153K)
Paper (PDF, 320K)

DOI

10.5244/C.29.31
https://dx.doi.org/10.5244/C.29.31

Citation

Suraj Srinivas and R. Venkatesh Babu. Data-free Parameter Pruning for Deep Neural Networks. In Xianghua Xie, Mark W. Jones, and Gary K. L. Tam, editors, Proceedings of the British Machine Vision Conference (BMVC), pages 31.1-31.12. BMVA Press, September 2015.

Bibtex

@inproceedings{BMVC2015_31,
	title={Data-free Parameter Pruning for Deep Neural Networks},
	author={Suraj Srinivas and R. Venkatesh Babu},
	year={2015},
	month={September},
	pages={31.1--31.12},
	articleno={31},
	numpages={12},
	booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
	publisher={BMVA Press},
	editor={Xianghua Xie and Mark W. Jones and Gary K. L. Tam},
	doi={10.5244/C.29.31},
	isbn={1-901725-53-7},
	url={https://dx.doi.org/10.5244/C.29.31}
}