
2022 | Original Paper | Book Chapter

Multi-Exit Semantic Segmentation Networks

Authors: Alexandros Kouris, Stylianos I. Venieris, Stefanos Laskaridis, Nicholas Lane

Published in: Computer Vision – ECCV 2022

Publisher: Springer Nature Switzerland

Abstract

Semantic segmentation arises as the backbone of many vision systems, spanning from self-driving cars and robot navigation to augmented reality and teleconferencing. Frequently operating under stringent latency constraints within a limited resource envelope, optimising for efficient execution becomes important. At the same time, the heterogeneous capabilities of the target platforms and the diverse constraints of different applications require the design and training of multiple target-specific segmentation models, leading to excessive maintenance costs. To this end, we propose a framework for converting state-of-the-art segmentation CNNs to Multi-Exit Semantic Segmentation (MESS) networks: specially trained models that employ parametrised early exits along their depth to i) dynamically save computation during inference on easier samples and ii) save training and maintenance cost by offering a post-training customisable speed-accuracy trade-off. Designing and training such networks naively can hurt performance. Thus, we propose a novel two-staged training scheme for multi-exit networks. Furthermore, the parametrisation of MESS enables co-optimising the number, placement and architecture of the attached segmentation heads along with the exit policy, upon deployment via exhaustive search in <1 GPUh. This allows MESS to rapidly adapt to the device capabilities and application requirements for each target use-case, offering a train-once-deploy-everywhere solution. MESS variants achieve latency gains of up to 2.83× with the same accuracy, or 5.33 pp higher accuracy for the same computational budget, compared to the original backbone network. Lastly, MESS delivers orders of magnitude faster architectural customisation, compared to state-of-the-art techniques.
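The core early-exit mechanism described in the abstract can be sketched as follows. This is an illustrative assumption, not the paper's exact design: the exit criterion shown (mean per-pixel top-1 confidence compared against a per-exit threshold) is one common policy from the early-exit literature, and the names `exit_heads` and `thresholds` are hypothetical.

```python
import numpy as np

def early_exit_inference(x, exit_heads, thresholds):
    """Confidence-based early-exit inference (illustrative sketch).

    exit_heads: list of callables; each maps the input to per-pixel
        class probabilities of shape (H, W, num_classes). In a MESS-style
        network these would be segmentation heads attached at
        increasing depths of a shared backbone.
    thresholds: per-exit confidence thresholds (one per early exit),
        tuned post-training, e.g. on a held-out calibration set.
    """
    for head, tau in zip(exit_heads[:-1], thresholds):
        probs = head(x)
        # Proxy for sample difficulty: average top-1 confidence
        # across all pixels of the predicted segmentation map.
        confidence = probs.max(axis=-1).mean()
        if confidence >= tau:
            # "Easy" sample: return this exit's prediction and skip
            # the remaining (deeper, costlier) backbone stages.
            return probs.argmax(axis=-1)
    # "Hard" sample: fall through to the final (deepest) exit.
    return exit_heads[-1](x).argmax(axis=-1)
```

A deployment-time search can then pick which exits to keep and which thresholds to use, trading latency against accuracy without retraining.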


Footnotes
1. e.g. Dilated Residual Blocks for ResNet-based [17] backbones, Inverted Residual Blocks for MobileNet-based [49] backbones, etc.
2. Evaluated on a held-out Calibration Set during search (equally sized to the target Test Set).
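Footnote 2's idea of tuning exit parameters on a held-out calibration set can be sketched as below. This is a minimal, assumed illustration of threshold calibration for a single exit; the paper's actual search additionally co-optimises the number, placement and architecture of the exits, and the helper name `calibrate_threshold` is hypothetical.

```python
import numpy as np

def calibrate_threshold(confidences, correct, target_accuracy):
    """Pick the smallest exit threshold whose accepted samples meet a
    target accuracy on a held-out calibration set (illustrative sketch).

    confidences: 1-D np.ndarray, per-sample confidence of one exit head.
    correct: 1-D bool np.ndarray, whether that exit's prediction was
        acceptable for each calibration sample.
    """
    for tau in sorted(set(confidences)):
        accepted = confidences >= tau  # samples that would exit early
        if accepted.any() and correct[accepted].mean() >= target_accuracy:
            return tau  # lowest threshold meeting the accuracy target
    # No threshold qualifies: disable early exiting at this head.
    return float("inf")
```

Because only thresholds (and exit selection) are searched, not weights, such a search is cheap, which is consistent with the abstract's claim of post-training customisation in under 1 GPUh.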
 
References
1. Almeida, M., Laskaridis, S., Leontiadis, I., Venieris, S.I., Lane, N.D.: EmBench: quantifying performance variations of deep neural networks across modern commodity devices. In: The 3rd International Workshop on Deep Learning for Mobile Systems and Applications (EMDL) (2019)
2. Badrinarayanan, V., Kendall, A., Cipolla, R.: SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI) 39(12), 2481–2495 (2017)
3. Bolukbasi, T., Wang, J., Dekel, O., Saligrama, V.: Adaptive neural networks for efficient inference. In: International Conference on Machine Learning (ICML), pp. 527–536 (2017)
4. Chen, L.-C., et al.: Searching for efficient multi-scale architectures for dense image prediction. In: Advances in Neural Information Processing Systems (NeurIPS), pp. 8699–8710 (2018)
5. Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI) 40(4), 834–848 (2017)
6. Chen, L.-C., Papandreou, G., Schroff, F., Adam, H.: Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587 (2017)
7.
8. Cheng, F., Zhang, H., Yuan, D., Sun, M.: Leveraging semantic segmentation with learning-based confidence measure. Neurocomputing 329, 21–31 (2019)
9. Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: ImageNet: a large-scale hierarchical image database. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248–255 (2009)
10. Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A.: The Pascal visual object classes (VOC) challenge. Int. J. Comput. Vis. (IJCV) 88(2), 303–338 (2010)
11. Fang, B., Zeng, X., Zhang, M.: NestDNN: resource-aware multi-tenant on-device deep learning for continuous mobile vision. In: Annual International Conference on Mobile Computing and Networking (MobiCom), pp. 115–127 (2018)
12. Figurnov, M.: Spatially adaptive computation time for residual networks. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1039–1048 (2017)
13. Gao, X., Zhao, Y., Dudziak, Ł., Mullins, R., Xu, C.Z.: Dynamic channel pruning: feature boosting and suppression. In: International Conference on Learning Representations (ICLR) (2019)
15. Ghosh, S., Das, N., Das, I., Maulik, U.: Understanding deep learning techniques for image segmentation. ACM Comput. Surv. (CSUR) 52(4), 1–35 (2019)
16. Hariharan, B., Arbeláez, P., Bourdev, L., Maji, S., Malik, J.: Semantic contours from inverse detectors. In: International Conference on Computer Vision (ICCV), pp. 991–998 (2011)
17. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778 (2016)
18. Hinton, G., Vinyals, O., Dean, J.: Distilling the knowledge in a neural network. In: NeurIPS 2014 Deep Learning Workshop (2014)
19. Hua, W., Zhou, Y., De Sa, C.M., Zhang, Z., Edward Suh, G.: Channel gating neural networks. In: Advances in Neural Information Processing Systems (NeurIPS), pp. 1886–1896 (2019)
20. Huang, G., Chen, D., Li, T., Wu, F., van der Maaten, L., Weinberger, K.: Multi-scale dense networks for resource efficient image classification. In: International Conference on Learning Representations (ICLR) (2018)
21. Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4700–4708 (2017)
22. Ignatov, A., et al.: AI benchmark: all about deep learning on smartphones in 2019. In: International Conference on Computer Vision (ICCV) Workshops (2019)
23. Jiang, J., Wang, X., Long, M., Wang, J.: Resource efficient domain adaptation. In: ACM International Conference on Multimedia (MM) (2020)
24. Kaya, Y., Hong, S., Dumitras, T.: Shallow-deep networks: understanding and mitigating network overthinking. In: International Conference on Machine Learning (ICML) (2019)
25. Laskaridis, S., Kouris, A., Lane, N.D.: Adaptive inference through early-exit networks: design, challenges and directions. In: Proceedings of the 5th International Workshop on Embedded and Mobile Deep Learning (EMDL), pp. 1–6 (2021)
26. Laskaridis, S., Venieris, S.I., Almeida, M., Leontiadis, I., Lane, N.D.: SPINN: synergistic progressive inference of neural networks over device and cloud. In: Annual International Conference on Mobile Computing and Networking (MobiCom). ACM (2020)
27. Laskaridis, S., Venieris, S.I., Kim, H., Lane, N.D.: HAPI: hardware-aware progressive inference. In: International Conference on Computer-Aided Design (ICCAD) (2020)
28. Leontiadis, I., Laskaridis, S., Venieris, S.I., Lane, N.D.: It's always personal: using early exits for efficient on-device CNN personalisation. In: Proceedings of the 22nd International Workshop on Mobile Computing Systems and Applications (HotMobile) (2021)
29. Li, H., Zhang, H., Qi, X., Yang, R., Huang, G.: Improved techniques for training adaptive deep networks. In: IEEE International Conference on Computer Vision (ICCV) (2019)
30. Li, X., Liu, Z., Luo, P., Loy, C.C., Tang, X.: Not all pixels are equal: difficulty-aware semantic segmentation via deep layer cascade. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3193–3202 (2017)
31. Li, Y., et al.: Learning dynamic routing for semantic segmentation. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8553–8562 (2020)
32. Lin, G., Milan, A., Shen, C., Reid, I.: RefineNet: multi-path refinement networks for high-resolution semantic segmentation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1925–1934 (2017)
33. Lin, J., Rao, Y., Lu, J., Zhou, J.: Runtime neural pruning. In: Advances in Neural Information Processing Systems (NeurIPS), pp. 2181–2191 (2017)
35. Liu, C., et al.: Auto-DeepLab: hierarchical neural architecture search for semantic image segmentation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 82–92 (2019)
36. Liu, L., Li, H., Gruteser, M.: Edge assisted real-time object detection for mobile augmented reality. In: Annual International Conference on Mobile Computing and Networking (MobiCom) (2019)
37. Liu, Y., Chen, K., Liu, C., Qin, Z., Luo, Z., Wang, J.: Structured knowledge distillation for semantic segmentation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
38. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3431–3440 (2015)
39. Luan, Y., Zhao, H., Yang, Z., Dai, Y.: MSD: multi-self-distillation learning via multi-classifiers within deep neural networks. arXiv preprint arXiv:1911.09418 (2019)
40. Luc, P., Couprie, C., Chintala, S., Verbeek, J.: Semantic segmentation using adversarial networks. In: NIPS Workshop on Adversarial Training (2016)
41. McCormac, J., Handa, A., Davison, A., Leutenegger, S.: SemanticFusion: dense 3D semantic mapping with convolutional neural networks. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 4628–4635 (2017)
42. Mehta, S., Rastegari, M., Caspi, A., Shapiro, L., Hajishirzi, H.: ESPNet: efficient spatial pyramid of dilated convolutions for semantic segmentation. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11214, pp. 561–580. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01249-6_34
43. Nekrasov, V., Chen, H., Shen, C., Reid, I.: Fast neural architecture search of compact semantic segmentation models via auxiliary cells. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9126–9135 (2019)
44. Noh, H., Hong, S., Han, B.: Learning deconvolution network for semantic segmentation. In: IEEE International Conference on Computer Vision (ICCV), pp. 1520–1528 (2015)
46. Peng, C., Zhang, X., Yu, G., Luo, G., Sun, J.: Large kernel matters - improve semantic segmentation by global convolutional network. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4353–4361 (2017)
47. Phuong, M., Lampert, C.H.: Distillation-based training for multi-exit architectures. In: IEEE International Conference on Computer Vision (ICCV), pp. 1355–1364 (2019)
49. Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.-C.: MobileNetV2: inverted residuals and linear bottlenecks. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4510–4520 (2018)
50. Siam, M., Gamal, M., Abdel-Razek, M., Yogamani, S., Jagersand, M., Zhang, H.: A comparative study of real-time semantic segmentation for autonomous driving. In: Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (2018)
51. Szegedy, C., et al.: Going deeper with convolutions. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015)
52. Teerapittayanon, S., McDanel, B., Kung, H.-T.: BranchyNet: fast inference via early exiting from deep neural networks. In: 23rd International Conference on Pattern Recognition (ICPR), pp. 2464–2469. IEEE (2016)
54. Vu, T.-H., Jain, H., Bucher, M., Cord, M., Pérez, P.: ADVENT: adversarial entropy minimization for domain adaptation in semantic segmentation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2517–2526 (2019)
56. Wang, Y., Zhang, X., Hu, X., Zhang, B., Su, H.: Dynamic network pruning with interpretable layerwise channel selection. In: AAAI Conference on Artificial Intelligence (AAAI), pp. 6299–6306 (2020)
57. Wu, H., Zhang, J., Huang, K., Liang, K., Yizhou, Y.: FastFCN: rethinking dilated convolution in the backbone for semantic segmentation. arXiv preprint arXiv:1903.11816 (2019)
58. Wu, Z., et al.: BlockDrop: dynamic inference paths in residual networks. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8817–8826 (2018)
59. Xin, J., Tang, R., Lee, J., Yu, Y., Lin, J.: DeeBERT: dynamic early exiting for accelerating BERT inference. In: 58th Annual Meeting of the Association for Computational Linguistics (ACL), pp. 2246–2251 (2020)
61. Xu, H., Gao, Y., Yu, F., Darrell, T.: End-to-end learning of driving models from large-scale video datasets. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2174–2182 (2017)
62. Yao, Z., Cao, S., Xiao, W., Zhang, C., Nie, L.: Balanced sparsity for efficient DNN inference on GPU. In: AAAI Conference on Artificial Intelligence (AAAI), pp. 5676–5683 (2019)
63. Yi, J., Lee, Y.: Heimdall: mobile GPU coordination platform for augmented reality applications. In: Annual International Conference on Mobile Computing and Networking (MobiCom) (2020)
65. Yu, F., Koltun, V.: Multi-scale context aggregation by dilated convolutions. In: International Conference on Learning Representations (ICLR) (2016)
66. Yu, F., Koltun, V., Funkhouser, T.: Dilated residual networks. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 472–480 (2017)
67.
69. Zeng, D., et al.: Towards cardiac intervention assistance: hardware-aware neural architecture exploration for real-time 3D cardiac cine MRI segmentation. In: ACM/IEEE International Conference on Computer-Aided Design (ICCAD) (2020)
70. Zhang, L., Song, J., Gao, A., Chen, J., Bao, C., Ma, K.: Be your own teacher: improve the performance of convolutional neural networks via self distillation. In: IEEE International Conference on Computer Vision (ICCV) (2019)
71. Zhang, L., Tan, Z., Song, J., Chen, J., Bao, C., Ma, K.: SCAN: a scalable neural networks framework towards compact and efficient models. In: Advances in Neural Information Processing Systems (NeurIPS) (2019)
73. Zhao, H., Shi, J., Qi, X., Wang, X., Jia, J.: Pyramid scene parsing network. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2881–2890 (2017)
74. Zhou, Z., Chen, X., Li, E., Zeng, L., Luo, K., Zhang, J.: Edge intelligence: paving the last mile of artificial intelligence with edge computing. Proc. IEEE 107(8), 1738–1762 (2019)
Metadata
Title
Multi-Exit Semantic Segmentation Networks
Authors
Alexandros Kouris
Stylianos I. Venieris
Stefanos Laskaridis
Nicholas Lane
Copyright year
2022
DOI
https://doi.org/10.1007/978-3-031-19803-8_20
