
08.04.2024

CRetinex: A Progressive Color-Shift Aware Retinex Model for Low-Light Image Enhancement

Published in: International Journal of Computer Vision


Abstract

Low-light environments introduce various complex degradations into captured images. Retinex-based methods have demonstrated effective enhancement performance by decomposing an image into illumination and reflectance, allowing for selective adjustment and removal of degradations. However, different types of pollution in reflectance are often treated together. The absence of an explicit distinction and definition of the various pollution types leaves residual pollution in the results. In particular, color shift, which is generally spatially invariant, differs from other spatially variant pollution and proves difficult to eliminate with denoising methods. The remaining color shift compromises color constancy both theoretically and in practice. In this paper, we consider the different manifestations of degradations and decompose them further. We propose a color-shift aware Retinex model, termed CRetinex, which decomposes an image into reflectance, color shift, and illumination. Specific networks are designed to remove spatially variant pollution, correct color shift, and adjust illumination separately. Comparative experiments with state-of-the-art methods demonstrate the qualitative and quantitative superiority of our approach. Furthermore, extensive experiments on multiple datasets, including real and synthetic images, along with extended validation, confirm the effectiveness of color-shift aware decomposition and the generalization of CRetinex over a wide range of low-light levels.
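The three-way decomposition described above can be illustrated with a minimal classical sketch. This is an assumption-laden approximation, not the paper's learned networks: it takes a multiplicative model I = R · C · L, estimates illumination L with the common max-channel prior, and models the color shift C as a spatially invariant per-channel gain (consistent with the abstract's observation that color shift is generally spatially invariant). The function name `decompose` and the gray-world-style normalization of C are illustrative choices.

```python
import numpy as np

def decompose(img, eps=1e-6):
    """Illustrative color-shift-aware Retinex-style decomposition.

    Assumes the multiplicative model I = R * C * L, where
      L : single-channel illumination map (per-pixel channel maximum),
      C : spatially invariant per-channel color-shift gain,
      R : color-corrected reflectance.
    """
    img = np.asarray(img, dtype=np.float64)
    # Illumination: per-pixel maximum over color channels (a common Retinex prior).
    L = img.max(axis=-1, keepdims=True)
    coarse = img / (L + eps)  # reflectance still polluted by color shift
    # Color shift: one gain per channel, normalized so the mean gain is 1.
    C = coarse.reshape(-1, img.shape[-1]).mean(axis=0)
    C = C / C.mean()
    R = coarse / (C + eps)    # reflectance with the global color cast removed
    return R, C, L
```

Because each factor is obtained by exact division, the product R · C · L reconstructs the input up to the small `eps` stabilizer; the interesting part is that a global color cast in the input lands in C rather than in R.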


Metadata
Title
CRetinex: A Progressive Color-Shift Aware Retinex Model for Low-Light Image Enhancement
Publication date
08.04.2024
Published in
International Journal of Computer Vision
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI
https://doi.org/10.1007/s11263-024-02065-z
