14-11-2022 | Research Article

DRF-DRC: dynamic receptive field and dense residual connections for model compression

Authors: Wei Wang, Yongde Zhang, Liqiang Zhu

Published in: Cognitive Neurodynamics | Issue 6/2023

Abstract

Deep convolutional neural networks have achieved remarkable progress on computer vision tasks in recent years. These novel neural architectures are mostly designed manually by human experts, which is a time-consuming process and rarely yields the best solution. Hence, neural architecture search (NAS) has become a hot research topic for the automated design of neural architectures. In this paper, we propose the dynamic receptive field (DRF) operation and measurable dense residual connections (DRC) as part of the search space for designing efficient networks, i.e., DRENet. The search method can be deployed on a MobileNetV2-based search space. Experimental results on the CIFAR10/100, SVHN, CUB-200-2011, ImageNet and COCO benchmark datasets, together with an application example in a railway intelligent surveillance system, demonstrate the effectiveness of our scheme, which achieves superior performance.
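
The exact DRF and DRC formulations are not reproduced on this page, so the sketch below is only a rough illustration of the ideas named in the abstract. It assumes, by analogy with differentiable NAS, that a DRF block mixes depthwise convolutions of several kernel sizes through softmax-normalized architecture weights, and that DRC adds dense skip connections with learnable scalar gates whose magnitudes can be measured to decide which connections to keep. All names (DynamicReceptiveField, DenseResidualStack), kernel sizes, and gating choices below are illustrative assumptions, not the authors' implementation.

    # Illustrative PyTorch sketch only; the class names, kernel sizes and gating
    # scheme are assumptions, not the paper's actual DRF/DRC implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DynamicReceptiveField(nn.Module):
        """Hypothetical DRF block: mixes depthwise convolutions of several kernel sizes."""

        def __init__(self, channels, kernel_sizes=(3, 5, 7)):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Conv2d(channels, channels, k, padding=k // 2,
                          groups=channels, bias=False)
                for k in kernel_sizes
            ])
            # One architecture parameter per kernel size, trained jointly with the weights.
            self.alpha = nn.Parameter(torch.zeros(len(kernel_sizes)))

        def forward(self, x):
            w = F.softmax(self.alpha, dim=0)
            # The softmax-weighted sum of branches yields a "dynamic" effective receptive field.
            return sum(wi * branch(x) for wi, branch in zip(w, self.branches))

    class DenseResidualStack(nn.Module):
        """Hypothetical DRC stack: gated dense skips from every earlier feature map."""

        def __init__(self, channels, depth=3):
            super().__init__()
            self.layers = nn.ModuleList([DynamicReceptiveField(channels) for _ in range(depth)])
            # One learnable gate per dense connection; its magnitude is one possible
            # "measure" for deciding whether a connection is worth keeping.
            self.gates = nn.ParameterDict(
                {f"{i}_{j}": nn.Parameter(torch.ones(1))
                 for j in range(depth) for i in range(j)}
            )

        def forward(self, x):
            feats = [x]
            for j, layer in enumerate(self.layers):
                dense_skip = sum(self.gates[f"{i}_{j}"] * feats[i] for i in range(j))
                feats.append(layer(feats[-1]) + feats[-1] + dense_skip)
            return feats[-1]

    if __name__ == "__main__":
        block = DenseResidualStack(channels=32, depth=3)
        y = block(torch.randn(1, 32, 56, 56))
        print(y.shape)  # torch.Size([1, 32, 56, 56])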

Metadata
Title: DRF-DRC: dynamic receptive field and dense residual connections for model compression
Authors: Wei Wang, Yongde Zhang, Liqiang Zhu
Publication date: 14-11-2022
Publisher: Springer Netherlands
Published in: Cognitive Neurodynamics, Issue 6/2023
Print ISSN: 1871-4080
Electronic ISSN: 1871-4099
DOI: https://doi.org/10.1007/s11571-022-09913-z
