
2018 | OriginalPaper | Chapter

ConvNets Pruning by Feature Maps Selection

Authors: Junhua Zou, Ting Rui, You Zhou, Chengsong Yang, Sai Zhang

Published in: Artificial Intelligence and Robotics

Publisher: Springer International Publishing


Abstract

Convolutional neural networks (CNNs) have been one of the main research focuses in machine learning in recent years. However, as CNNs continue to advance in vision and speech, their parameter counts keep growing. A CNN with millions of parameters has a very large memory footprint, which hinders its widespread deployment, especially on mobile devices. Motivated by this observation, we not only design a CNN pruning method that removes unimportant feature maps, but also propose a separability-value-based method for determining an appropriate number of feature maps to prune. Experimental results on the CIFAR-10 dataset show that the feature maps in each convolutional layer can be pruned by at least 15.6% and by as much as 59.7% without any loss in performance. We further verify the effectiveness of the number-confirmation method through a large number of repeated experiments that gradually prune the feature maps of each convolutional layer.
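
To make the pruning idea concrete, the sketch below is a minimal, hypothetical illustration rather than the authors' exact procedure: each feature map receives a Fisher-style separability score (between-class variance of its mean activation divided by its within-class variance), and the lowest-scoring maps are pruned. The function names (separability_scores, select_maps), the array layout, and the pruning ratio are assumptions introduced here for illustration only.

import numpy as np

def separability_scores(activations, labels):
    """Per-feature-map separability: between-class variance of the map's mean
    activation divided by its within-class variance (higher = more discriminative)."""
    classes = np.unique(labels)
    overall_mean = activations.mean(axis=0)            # shape: (num_maps,)
    between = np.zeros_like(overall_mean)
    within = np.zeros_like(overall_mean)
    for c in classes:
        cls_acts = activations[labels == c]            # shape: (n_c, num_maps)
        cls_mean = cls_acts.mean(axis=0)
        between += len(cls_acts) * (cls_mean - overall_mean) ** 2
        within += ((cls_acts - cls_mean) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def select_maps(scores, prune_ratio):
    """Return sorted indices of the feature maps to keep after pruning the
    lowest-scoring fraction given by prune_ratio."""
    num_keep = int(round(len(scores) * (1.0 - prune_ratio)))
    return np.sort(np.argsort(scores)[::-1][:num_keep])

# Toy usage: 1000 images, 64 feature maps, 10 classes; prune about 15.6% of the maps.
rng = np.random.default_rng(0)
acts = rng.normal(size=(1000, 64))     # e.g. global-average-pooled activation per map
labels = rng.integers(0, 10, size=1000)
keep = select_maps(separability_scores(acts, labels), prune_ratio=0.156)
print(f"{len(keep)} of 64 feature maps kept")

In a structured-pruning setting, the kept indices would typically be used to slice the corresponding convolutional filters (and the next layer's input channels) before fine-tuning the pruned network.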

Metadata
Title: ConvNets Pruning by Feature Maps Selection
Authors: Junhua Zou, Ting Rui, You Zhou, Chengsong Yang, Sai Zhang
Copyright Year: 2018
DOI: https://doi.org/10.1007/978-3-319-69877-9_27
