2024 | OriginalPaper | Chapter

An Efficient Approach for No Reference Image Quality Assessment (NR-IQA) Index Using Autoencoder-Based Regression Model (ARM)

Authors : Milind S. Patil, Pradip B. Mane

Published in: Advances in Photonics and Electronics

Publisher: Springer Nature Switzerland

Abstract

Accurate perceptual quality evaluation is crucial in a broad variety of multimedia applications. The purpose of image quality assessment is to mimic subjective human visual perception and to automate the process of inferring image quality. However, current NR-IQA systems judge quality solely from the distorted image, without taking into account the influence of the viewing environment on human perception. No-reference quality evaluation aims to automatically assess the perceived quality of the final result when a single image is created by fusing together a number of band images. This research provides a novel no-reference image quality evaluation approach for satellite image fusion methods. The measure leverages a regressor-based autoencoder throughout the assessment process. In the proposed technique, features are derived by the fine-tuned encoder using a relationship metric and parameters that relate quality to the pixels of the fused image. Finally, these features are regressed to quality scores and concatenated in order to assess and quantify image quality degradation. The experimental findings show that the proposed NR-IQA technique outperforms the existing state of the art on a broad range of NR-IQA datasets, making it suitable for satellite image classification and distortion-type identification tasks.
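The abstract describes a two-stage pipeline: an encoder (pre-trained as part of an autoencoder, then fine-tuned) extracts features from the fused image, and a regression head maps those features to a scalar quality score. The sketch below illustrates that shape of model only; the layer sizes, the dense architecture, and all names (`TinyAutoencoderRegressor`, `quality_score`, etc.) are hypothetical and are not the authors' actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

class TinyAutoencoderRegressor:
    """Illustrative NR-IQA sketch: encode a flattened fused-image patch
    into a low-dimensional feature vector, then regress that vector to
    a scalar quality score. Weights here are random placeholders."""

    def __init__(self, in_dim=64, code_dim=8):
        self.W_enc = rng.normal(scale=0.1, size=(in_dim, code_dim))
        self.W_dec = rng.normal(scale=0.1, size=(code_dim, in_dim))
        self.w_reg = rng.normal(scale=0.1, size=(code_dim,))

    def encode(self, x):
        # fine-tuned encoder: patch -> feature vector
        return relu(x @ self.W_enc)

    def reconstruct(self, x):
        # decoder path; a reconstruction loss would drive
        # unsupervised autoencoder pre-training
        return self.encode(x) @ self.W_dec

    def quality_score(self, x):
        # regression head: features -> predicted quality score,
        # trained against subjective quality labels
        return float(self.encode(x) @ self.w_reg)

patch = rng.random(64)   # stand-in for a flattened fused-image patch
model = TinyAutoencoderRegressor()
score = model.quality_score(patch)
```

In a real system the encoder would be convolutional and the regression head trained on mean-opinion-score labels; this sketch only shows how the autoencoder's encoder and the regressor fit together.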


Metadata
Title
An Efficient Approach for No Reference Image Quality Assessment (NR-IQA) Index Using Autoencoder-Based Regression Model (ARM)
Authors
Milind S. Patil
Pradip B. Mane
Copyright Year
2024
DOI
https://doi.org/10.1007/978-3-031-68038-0_16