
03.12.2024

Lightweight Low-Power U-Net Architecture for Semantic Segmentation

by: Chaitanya Modiboyina, Indrajit Chakrabarti, Soumya Kanti Ghosh

Published in: Circuits, Systems, and Signal Processing


Abstract

The U-Net is a popular deep-learning model for semantic segmentation tasks. This paper describes an implementation of the U-Net architecture on an FPGA (Field Programmable Gate Array) for real-time image segmentation. The proposed design uses a parallel-pipelined architecture to achieve high throughput, and addresses the resource and power constraints of edge devices by compressing CNN (Convolutional Neural Network) models and improving hardware efficiency. To this end, we propose a pruning technique based on parallel quantization that reduces weight storage requirements by quantizing U-Net layers into a few segments, yielding a lightweight U-Net model. The system requires \(\approx 1.5\,\text{Mb}\) of memory for storing weights. The proposed U-Net architecture has been evaluated on the Electron Microscopy and BraTS datasets, achieving Intersection over Union (IoU) scores of 90.31% and 94.1%, respectively, with 4-bit quantized weights. Additionally, we designed a shift-based U-Net accelerator that replaces multiplications with simple shift operations, further improving efficiency. The proposed U-Net architecture achieves a 3.5\(\times\) reduction in power consumption and a 35% reduction in area compared to previous architectures. To further reduce power consumption, we omit the computation for zero weights. Overall, the present work puts forward an effective method for optimizing CNN models on edge devices while meeting their computational and power constraints.
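The core hardware tricks named in the abstract — quantizing weights so that multiplications become shifts, and skipping the computation for zero (pruned) weights — can be illustrated with a small sketch. This is not the paper's implementation; the function names, the power-of-two quantization scheme, and the `zero_threshold` pruning parameter are illustrative assumptions chosen to make the shift-and-skip idea concrete.

```python
import numpy as np

def quantize_pow2(w, zero_threshold=0.05):
    """Quantize each weight to a signed power of two (sign, exponent).

    Weights below zero_threshold in magnitude are pruned to zero, so the
    accelerator can skip them entirely (illustrative scheme, not the
    paper's exact quantizer).
    """
    sign = np.sign(w).astype(np.int32)
    mag = np.abs(w)
    pruned = mag < zero_threshold
    # Round log2 of the magnitude to the nearest integer exponent;
    # pruned entries get a dummy exponent that is never used.
    exp = np.round(np.log2(np.where(pruned, 1.0, mag))).astype(np.int32)
    return sign, exp, pruned

def shift_dot(x_int, sign, exp, pruned):
    """Dot product using shifts instead of multiplies.

    Each product x * w becomes a left or right shift of x by the weight's
    exponent; zero weights are skipped, mirroring the zero-weight
    computation omission described in the abstract.
    """
    acc = 0
    for xi, s, e, p in zip(x_int, sign, exp, pruned):
        if p:                                   # skip pruned (zero) weights
            continue
        term = (xi << e) if e >= 0 else (xi >> -e)
        acc += s * term
    return acc
```

For example, with weights `[0.5, -2.0, 0.01, 1.0]` the quantizer produces exponents `[-1, 1, 0, 0]` and prunes the third weight, so a dot product with integer activations `[4, 2, 8, 3]` reduces to three shift-add steps.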

Metadata
Title
Lightweight Low-Power U-Net Architecture for Semantic Segmentation
Authors
Chaitanya Modiboyina
Indrajit Chakrabarti
Soumya Kanti Ghosh
Publication date
03.12.2024
Publisher
Springer US
Published in
Circuits, Systems, and Signal Processing
Print ISSN: 0278-081X
Electronic ISSN: 1531-5878
DOI
https://doi.org/10.1007/s00034-024-02920-x