Published in: International Journal of Computer Vision 11/2023

25.07.2023

Hierarchical Curriculum Learning for No-Reference Image Quality Assessment

Authors: Juan Wang, Zewen Chen, Chunfeng Yuan, Bing Li, Wentao Ma, Weiming Hu



Abstract

Although convolutional neural networks (CNNs) have achieved remarkable success in no-reference image quality assessment (NR-IQA), many challenges remain in improving IQA performance for authentically distorted images. An important factor is that insufficient annotated data limits the training of high-capacity CNNs to accommodate the diverse distortions, complicated semantic structures and high-variance quality scores of these images. To address this problem, this paper proposes a hierarchical curriculum learning (HCL) framework for NR-IQA. The main idea of the proposed framework is to leverage external data to learn prior knowledge about IQA broadly and progressively. Specifically, image restoration, a task closely related to NR-IQA, is used as the first curriculum to learn image quality related knowledge (i.e., semantic and distortion information) from massive distorted-reference image pairs. Then, multiple lightweight subnetworks are designed to learn human scoring rules on multiple available synthetic IQA datasets independently, and a cross-dataset quality assessment correlation (CQAC) module is proposed to fully explore the similarities and diversities of the different scoring rules. Finally, the whole model is fine-tuned on the target authentic IQA dataset to fuse the learned knowledge and adapt to the target data distribution. Experimental results show that our model achieves state-of-the-art performance on multiple standard authentic IQA datasets. Moreover, the generalization of our model is fully validated by cross-dataset evaluation and the gMAD competition. In addition, extensive analyses show that the proposed HCL framework is effective in improving the performance of our model.
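To make the cross-dataset idea concrete: different synthetic IQA datasets rate the same perceptual quality on different scales and under different scoring rules, so relating them requires a learned mapping between score spaces. The abstract does not give the CQAC module's internals; the following is only a minimal stand-in sketch, using an ordinary least-squares linear map between hypothetical mean opinion scores (MOS) of images rated under two different scales, to illustrate why such a correlation can be exploited at all.

```python
# Illustrative sketch, NOT the authors' CQAC implementation: fit a linear
# map dst ≈ a * src + b between quality scores of the same images rated
# under two hypothetical dataset-specific scoring rules.

def fit_linear_map(src_scores, dst_scores):
    """Ordinary least-squares fit of a 1-D linear map dst = a * src + b."""
    n = len(src_scores)
    mean_x = sum(src_scores) / n
    mean_y = sum(dst_scores) / n
    # Slope from covariance / variance, intercept through the means.
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(src_scores, dst_scores))
    var = sum((x - mean_x) ** 2 for x in src_scores)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical MOS values for the same four images: one dataset rates
# on a [0, 100] scale, the other on a [1, 5] scale.
scores_a = [20.0, 40.0, 60.0, 80.0]
scores_b = [1.8, 2.6, 3.4, 4.2]

a, b = fit_linear_map(scores_a, scores_b)
mapped = [a * x + b for x in scores_a]  # scores_a expressed on scale b
```

In the paper the correspondence is learned jointly with the quality subnetworks rather than fitted post hoc, but the sketch shows the underlying premise: scoring rules that measure the same perceptual quantity are strongly correlated, so knowledge from one dataset's score space can transfer to another's.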


Metadata
Title
Hierarchical Curriculum Learning for No-Reference Image Quality Assessment
Authors
Juan Wang
Zewen Chen
Chunfeng Yuan
Bing Li
Wentao Ma
Weiming Hu
Publication date
25.07.2023
Publisher
Springer US
Published in
International Journal of Computer Vision / Issue 11/2023
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI
https://doi.org/10.1007/s11263-023-01851-5
