
2021 | Original Paper | Book Chapter

Remove to Improve?

Authors: Kamila Abdiyeva, Martin Lukac, Narendra Ahuja

Published in: Pattern Recognition. ICPR International Workshops and Challenges

Publisher: Springer International Publishing


Abstract

The workhorses of CNNs are their filters, located at different layers and tuned to different features. Filter responses are combined using weights obtained via network training, which aims at optimal results for the entire training data, e.g., the highest average classification accuracy. In this paper, we extend the current understanding of the roles played by the filters, their mutual interactions, and their relationship to classification accuracy. We are motivated by the observation that the classification accuracy for some classes increases, rather than decreases, when some filters are pruned from a CNN. We experimentally address the following question: under what conditions does filter pruning increase classification accuracy? We show that accuracy improves for certain classes, namely those placed during learning into a region of the space spanned by filter usage that is populated with semantically related neighbors. The neighborhood structure of such classes is, however, sparse enough that the compression induced by pruning, which draws all classes together, also brings their sample data closer together and thus increases classification accuracy.
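The abstract refers to pruning filters from a CNN. As a hedged illustration of the mechanics involved (not the paper's specific method), the sketch below implements magnitude-based filter selection, a common proxy for filter importance: filters with the smallest L1 norm are removed first. The layer values and the 50% pruning fraction are illustrative assumptions.

```python
# Minimal sketch of magnitude-based (L1-norm) filter pruning.
# A real CNN filter is a 3-D weight tensor; here each filter is
# flattened to a list of weights for simplicity.

def l1_norm(filt):
    """L1 norm of a filter given as a flat list of weights."""
    return sum(abs(w) for w in filt)

def prune_filters(filters, fraction):
    """Return indices of filters kept after removing the
    lowest-L1-norm `fraction` of filters."""
    ranked = sorted(range(len(filters)), key=lambda i: l1_norm(filters[i]))
    n_prune = int(len(filters) * fraction)
    pruned = set(ranked[:n_prune])
    return [i for i in range(len(filters)) if i not in pruned]

# Toy 4-filter layer (each filter flattened to 4 weights).
layer = [
    [0.9, -0.8, 0.7, 0.6],     # strong filter
    [0.01, 0.02, -0.01, 0.0],  # near-dead filter
    [0.5, 0.4, -0.3, 0.2],
    [0.05, -0.04, 0.03, 0.02],
]

kept = prune_filters(layer, fraction=0.5)
print(kept)  # -> [0, 2]
```

The paper's central observation would then be studied by re-evaluating per-class accuracy of the network after such filters are removed, rather than only the average accuracy.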


Metadata
Title
Remove to Improve?
Authors
Kamila Abdiyeva
Martin Lukac
Narendra Ahuja
Copyright year
2021
DOI
https://doi.org/10.1007/978-3-030-68796-0_11