Published in:

06.12.2022

A Multilayer Network-Based Approach to Represent, Explore and Handle Convolutional Neural Networks

Authors: Alessia Amelio, Gianluca Bonifazi, Enrico Corradini, Domenico Ursino, Luca Virgili

Published in: Cognitive Computation | Issue 1/2023

Abstract

Deep learning techniques and tools have experienced enormous growth and widespread diffusion in recent years. Computational biology and cognitive neuroscience are among the areas where deep learning has become most widespread. At the same time, the need for tools to explore, understand, and possibly manipulate a deep learning model has emerged strongly. We propose an approach to map a deep learning model into a multilayer network. Our approach is tailored to Convolutional Neural Networks (CNNs) but can be easily extended to other architectures. To show how our mapping approach enables the exploration and management of deep learning networks, we illustrate a technique for compressing a CNN. It detects whether there are convolutional layers that can be pruned without losing too much information and, if so, returns a new CNN obtained from the original one by pruning such layers. We demonstrate the effectiveness of the multilayer mapping approach and the corresponding compression algorithm on the VGG16 network and two benchmark datasets, namely MNIST and CALTECH-101. In the former case, we obtain a 0.56% increase in accuracy, precision, and recall, and a 21.43% decrease in mean epoch time. In the latter case, we obtain an 11.09% increase in accuracy, a 22.27% increase in precision, a 38.66% increase in recall, and a 47.22% decrease in mean epoch time. Finally, we compare our multilayer mapping approach with a similar one based on single layers and show the effectiveness of the former. We show that a multilayer network-based approach is able to capture and represent the complexity of a CNN while also allowing several manipulations of it. An extensive experimental analysis described in the paper demonstrates the suitability of our approach and the quality of its performance.
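To make the mapping idea concrete, the following minimal sketch (not the paper's implementation; the networkx-based representation, the node and layer names, and the importance score are assumptions made for this illustration) builds one graph layer per convolutional layer, with channels as nodes and edges weighted by an aggregated kernel statistic, here the maximum absolute weight (cf. footnote 1). A layer with a low score would then be a candidate for pruning.

```python
# Illustrative sketch of mapping a CNN's convolutional layers to a multilayer network.
# This is a hypothetical toy setup, not the authors' algorithm.
import networkx as nx
import numpy as np


def build_multilayer_network(conv_weights):
    """conv_weights: dict layer_name -> array of shape (out_channels, in_channels, kH, kW)."""
    layers = {}
    for name, w in conv_weights.items():
        g = nx.DiGraph()
        out_ch, in_ch = w.shape[0], w.shape[1]
        for o in range(out_ch):
            for i in range(in_ch):
                # Aggregate each kernel into a single edge weight (maximum absolute value).
                g.add_edge(f"in_{i}", f"out_{o}", weight=float(np.abs(w[o, i]).max()))
        layers[name] = g  # one network layer per convolutional layer
    return layers


def layer_importance(g):
    """A crude, hypothetical importance score: mean edge weight of the layer's graph."""
    weights = [d["weight"] for _, _, d in g.edges(data=True)]
    return float(np.mean(weights)) if weights else 0.0


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    conv_weights = {
        "conv1": rng.normal(size=(8, 3, 3, 3)),
        "conv2": rng.normal(size=(16, 8, 3, 3)),
    }
    multilayer = build_multilayer_network(conv_weights)
    for name, g in multilayer.items():
        print(name, "importance:", round(layer_importance(g), 3))
```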


Footnotes
1
An example of an aggregated value could be the maximum.
 
2
Here and in the following, we will use the symbols \(\mathcal{I}(i,j)\) and \(\mathcal{O}(i,j)\) to denote both the elements of the feature maps and the corresponding nodes of the class network.
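As a small worked illustration of this notational convention (a hypothetical toy example; the node-identifier scheme below is an assumption, not the paper's), the same index pair \((i,j)\) addresses both an element of a feature map and the corresponding node of the class network:

```python
# Toy illustration of the dual use of I(i,j) and O(i,j): array element vs. network node.
import numpy as np

rng = np.random.default_rng(1)
I = rng.random((4, 4))            # input feature map I
O = np.maximum(I - 0.5, 0.0)      # output feature map O (toy transformation)

def node_id(prefix, i, j):
    """Hypothetical mapping from a feature-map element (i, j) to a class-network node id."""
    return f"{prefix}({i},{j})"

i, j = 2, 3
print(I[i, j], "->", node_id("I", i, j))  # element of I and its corresponding node
print(O[i, j], "->", node_id("O", i, j))  # element of O and its corresponding node
```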
 
Metadata
Title
A Multilayer Network-Based Approach to Represent, Explore and Handle Convolutional Neural Networks
Authors
Alessia Amelio
Gianluca Bonifazi
Enrico Corradini
Domenico Ursino
Luca Virgili
Publication date
06.12.2022
Publisher
Springer US
Published in
Cognitive Computation / Issue 1/2023
Print ISSN: 1866-9956
Electronic ISSN: 1866-9964
DOI
https://doi.org/10.1007/s12559-022-10084-6