
2024 | Original Paper | Book Chapter

An Efficient Approach for No Reference Image Quality Assessment (NR-IQA) Index Using Autoencoder-Based Regression Model (ARM)

Authors: Milind S. Patil, Pradip B. Mane

Published in: Advances in Photonics and Electronics

Publisher: Springer Nature Switzerland


Abstract

Accurate perceptual quality evaluation is crucial in a broad range of multimedia applications. Image quality assessment aims to mimic human subjective visual perception and to automate the process of inferring image quality. Most current NR-IQA systems, however, judge quality from the distorted image alone, without accounting for the influence of the viewing environment on human perception. No-reference quality evaluation seeks to automatically assess the perceived quality of a final image produced by fusing several band images into one. This research presents a novel no-reference image quality evaluation approach for satellite image fusion methods. The measure leverages a regressor-based autoencoder throughout the assessment process. In the proposed technique, a fine-tuned encoder derives features from the pixels of the fused image using a relationship metric and quality-related parameters. These features are then concatenated and regressed to quality scores in order to assess and quantify image quality degradation. Experimental findings show that the proposed NR-IQA technique outperforms the existing state of the art on a broad range of NR-IQA datasets, making it suitable for satellite image classification and distortion-type identification tasks.
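The pipeline the abstract describes (encode fused-image features with an autoencoder, then regress the latent codes to quality scores) can be sketched roughly as below. This is a minimal illustrative stand-in, not the authors' model: it uses synthetic feature vectors, a closed-form linear autoencoder (SVD/PCA), and an ordinary least-squares regression head; the variable names and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "fused-image" feature vectors: a hypothetical stand-in for the
# pixel-level relationship features the chapter derives with its encoder.
n, d, k = 200, 16, 4                      # samples, feature dim, latent dim
Z = rng.normal(size=(n, k))               # hidden low-dimensional structure
B = rng.normal(size=(k, d))
X = Z @ B + 0.01 * rng.normal(size=(n, d))

# Synthetic subjective-style quality scores tied to the latent factors.
w_true = rng.normal(size=k)
y = Z @ w_true + 0.01 * rng.normal(size=n)

# 1) Linear autoencoder, solved in closed form via SVD (equivalent to PCA):
#    the top-k right singular vectors act as tied encoder/decoder weights.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
V = Vt[:k].T                              # (d, k) encoder weights
latent = Xc @ V                           # encode features to latent codes
recon = latent @ V.T + X.mean(axis=0)     # decode back to feature space

# 2) Regression head: map latent codes to quality scores (least squares,
#    standing in for the fine-tuned regressor in the proposed method).
A = np.hstack([latent, np.ones((n, 1))])  # append a bias column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

# Diagnostics: reconstruction fidelity and score-prediction quality.
rel_err = np.linalg.norm(X - recon) / np.linalg.norm(X)
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

A practical system would replace the linear encoder with a deep, fine-tuned autoencoder and train the regression head on subjective (MOS-style) scores, but the two-stage structure, encode then regress, is the same.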


Metadata
Title
An Efficient Approach for No Reference Image Quality Assessment (NR-IQA) Index Using Autoencoder-Based Regression Model (ARM)
Authors
Milind S. Patil
Pradip B. Mane
Copyright Year
2024
DOI
https://doi.org/10.1007/978-3-031-68038-0_16