Published in: International Journal of Computer Vision 4/2021

11 January 2021

Benchmarking Low-Light Image Enhancement and Beyond

Authors: Jiaying Liu, Dejia Xu, Wenhan Yang, Minhao Fan, Haofeng Huang


Abstract

In this paper, we present a systematic review and evaluation of existing single-image low-light enhancement algorithms. Beyond the commonly used low-level-vision-oriented evaluations, we also measure machine vision performance under low-light conditions via a face detection task, to explore the potential of jointly optimizing high-level and low-level vision enhancement. To this end, we first propose a large-scale low-light image dataset serving both low- and high-level vision, with diversified scenes and contents as well as the complex degradations of real scenarios, called Vision Enhancement in the LOw-Light condition (VE-LOL). Beyond paired low-/normal-light images without annotations, we additionally include human-related analysis resources, i.e., face images captured in low-light conditions with annotated face bounding boxes. We then benchmark from the perspectives of both human and machine vision. A rich variety of criteria is used for the low-level vision evaluation, including full-reference, no-reference, and semantic similarity metrics. We also measure the effect of low-light enhancement on face detection in low-light conditions, using state-of-the-art face detection methods. Furthermore, with the rich material of VE-LOL, we explore the novel problem of joint low-light enhancement and face detection. We develop an enhanced face detector that applies low-light enhancement and face detection jointly: the features extracted by the enhancement module are fed to the layer of the detection module at the same resolution, so that these features are intertwined and the two phases, i.e., enhancement and detection, jointly learn useful information. Experiments on VE-LOL compare state-of-the-art low-light enhancement algorithms, point out their limitations, and suggest promising future directions.
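The feature intertwining described above can be illustrated with a minimal sketch. It assumes, hypothetically, that the enhancement module exposes an intermediate feature map with the same spatial resolution as a layer of the detection module, and that fusion is a simple element-wise sum; the paper's actual fusion mechanism is not specified here.

```python
def fuse_features(enh_feat, det_feat):
    """Element-wise fusion of two same-resolution feature maps.

    Each feature map is modeled as an H x W grid (a list of rows of
    floats). This is only an illustrative stand-in for feeding
    enhancement features into the detection stream at matching
    resolution; real implementations operate on multi-channel tensors.
    """
    assert len(enh_feat) == len(det_feat), "spatial resolutions must match"
    return [
        [e + d for e, d in zip(enh_row, det_row)]
        for enh_row, det_row in zip(enh_feat, det_feat)
    ]
```

The point of the sketch is structural: because the two maps share a resolution, the fused result can replace the detection layer's input directly, letting gradients from the detection loss flow back into the enhancement module during joint training.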
Our dataset has supported the Track "Face Detection in Low Light Conditions" of the CVPR UG2+ Challenge (2019–2020) (http://cvpr2020.ug2challenge.org/).
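As an example of the full-reference criteria mentioned in the abstract, PSNR scores an enhanced image against its paired normal-light reference. A minimal sketch over flattened 8-bit pixel intensities (a standard definition, not code from the paper):

```python
import math

def psnr(reference, output, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences.

    A classic full-reference quality metric: higher is better, and the
    value is infinite when the two images are identical. Inputs are
    flattened pixel intensities in [0, max_val].
    """
    mse = sum((r - o) ** 2 for r, o in zip(reference, output)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: zero error
    return 10.0 * math.log10(max_val ** 2 / mse)
```

No-reference metrics, by contrast, must judge the enhanced image alone, which is why the benchmark reports both families alongside semantic similarity.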


Zurück zum Zitat Tang, X., Du, D. K., He, Z., & Liu, J. (2018). Pyramidbox: A context-assisted single shot face detector. In Proceedings of IEEE European conference on computer vision. Tang, X., Du, D. K., He, Z., & Liu, J. (2018). Pyramidbox: A context-assisted single shot face detector. In Proceedings of IEEE European conference on computer vision.
Zurück zum Zitat Tao, L., Zhu, C., Xiang, G., Li, Y., Jia, H., & Xie, X. (2017). Llcnn: A convolutional neural network for low-light image enhancement. In Proceedings of IEEE visual communication and image processing, pp. 1–4. Tao, L., Zhu, C., Xiang, G., Li, Y., Jia, H., & Xie, X. (2017). Llcnn: A convolutional neural network for low-light image enhancement. In Proceedings of IEEE visual communication and image processing, pp. 1–4.
Zurück zum Zitat Tieleman, T., & Hinton, G. (2012). Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. Tech. rep. Tieleman, T., & Hinton, G. (2012). Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. Tech. rep.
Zurück zum Zitat Vonikakis, V., Chrysostomou, D., Kouskouridas, R., & Gasteratos, A. (2013). A biologically inspired scale-space for illumination invariant feature detection. Measurement Science and Technology. Vonikakis, V., Chrysostomou, D., Kouskouridas, R., & Gasteratos, A. (2013). A biologically inspired scale-space for illumination invariant feature detection. Measurement Science and Technology.
Zurück zum Zitat Vonikakis, V., Kouskouridas, R., & Gasteratos, A. (2018). On the evaluation of illumination compensation algorithms. Multimedia Tools and Appllications, 77(8), 9211–9231. Vonikakis, V., Kouskouridas, R., & Gasteratos, A. (2018). On the evaluation of illumination compensation algorithms. Multimedia Tools and Appllications, 77(8), 9211–9231.
Zurück zum Zitat Wang, F., Chen, L., Li, C., Huang, S., Chen, Y., Qian, C., & Change Loy, C. (2018). The devil of face recognition is in the noise. In Proceedings of IEEE European conference on computer vision, pp. 765–780. Wang, F., Chen, L., Li, C., Huang, S., Chen, Y., Qian, C., & Change Loy, C. (2018). The devil of face recognition is in the noise. In Proceedings of IEEE European conference on computer vision, pp. 765–780.
Zurück zum Zitat Wang, L., Xiao, L., Liu, H., & Wei, Z. (2014). Variational bayesian method for retinex. IEEE Transactions on Image Processing, 23(8), 3381–3396.MathSciNetCrossRef Wang, L., Xiao, L., Liu, H., & Wei, Z. (2014). Variational bayesian method for retinex. IEEE Transactions on Image Processing, 23(8), 3381–3396.MathSciNetCrossRef
Zurück zum Zitat Wang, R., Zhang, Q., Fu, C. W., Shen, X., Zheng, W. S., & Jia, J. (2019). Underexposed photo enhancement using deep illumination estimation. In Proceedings of IEEE international conference on computer vision and pattern recognition. Wang, R., Zhang, Q., Fu, C. W., Shen, X., Zheng, W. S., & Jia, J. (2019). Underexposed photo enhancement using deep illumination estimation. In Proceedings of IEEE international conference on computer vision and pattern recognition.
Wang, S., Zheng, J., Hu, H. M., & Li, B. (2013). Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Transactions on Image Processing, 22(9), 3538–3548.
Wang, X., & Zhang, D. (2010). An optimized tongue image color correction scheme. IEEE Transactions on Information Technology in Biomedicine, 14(6), 1355–1364.
Wang, Y., Cao, Y., Zha, Z. J., Zhang, J., Xiong, Z., Zhang, W., & Wu, F. (2019). Progressive Retinex: Mutually reinforced illumination-noise perception network for low light image enhancement. In ACM international conference on multimedia.
Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600–612.
Wei, C., Wang, W., Yang, W., & Liu, J. (2018). Deep Retinex decomposition for low-light enhancement. In British machine vision conference.
Wu, X., Liu, X., Hiramatsu, K., & Kashino, K. (2017). Contrast-accumulated histogram equalization for image enhancement. In IEEE international conference on image processing (ICIP), pp. 3190–3194.
Xu, J., Ye, P., Li, Q., Du, H., Liu, Y., & Doermann, D. (2016). Blind image quality assessment based on high order statistics aggregation. IEEE Transactions on Image Processing, 25(9), 4444–4457.
Xu, J., Hou, Y., Ren, D., Liu, L., Zhu, F., Yu, M., Wang, H., & Shao, L. (2019). STAR: A structure and texture aware Retinex model. arXiv e-prints arXiv:1906.06690.
Xu, K., & Jung, C. (2017). Retinex-based perceptual contrast enhancement in images using luminance adaptation. In Proceedings of IEEE international conference on acoustics, speech, and signal processing, pp. 1363–1367.
Xu, K., Yang, X., Yin, B., & Lau, R. W. (2020). Learning to restore low-light images via decomposition-and-enhancement. In Proceedings of IEEE international conference on computer vision and pattern recognition.
Yan, W., Tan, R. T., & Dai, D. (2020). Nighttime defogging using high-low frequency decomposition and grayscale-color networks. In Proceedings of IEEE European conference on computer vision, pp. 473–488.
Yang, B., Yan, J., Lei, Z., & Li, S. Z. (2015). Fine-grained evaluation on face detection in the wild. In IEEE international conference and workshops on automatic face and gesture recognition, vol. 1, pp. 1–7.
Yang, Q., Jung, C., Fu, Q., & Song, H. (2018). Low light image denoising based on Poisson noise model and weighted TV regularization. In Proceedings of IEEE international conference on image processing, pp. 3199–3203.
Yang, S., Luo, P., Loy, C. C., & Tang, X. (2016). WIDER FACE: A face detection benchmark. In Proceedings of IEEE international conference on computer vision and pattern recognition, pp. 5525–5533.
Yang, W., Wang, S., Fang, Y., Wang, Y., & Liu, J. (2020). From fidelity to perceptual quality: A semi-supervised approach for low-light image enhancement. In Proceedings of IEEE international conference on computer vision and pattern recognition.
Ye, Z., Mohamadian, H., & Ye, Y. (2007). Discrete entropy and relative entropy study on nonlinear clustering of underwater and aerial images. In IEEE international conference on control applications, pp. 313–318.
Yeganeh, H., & Wang, Z. (2013). Objective quality assessment of tone-mapped images. IEEE Transactions on Image Processing, 22(2), 657–667.
Ying, Z., Li, G., & Gao, W. (2017). A bio-inspired multi-exposure fusion framework for low-light image enhancement. arXiv e-prints.
Ying, Z., Li, G., Ren, Y., Wang, R., & Wang, W. (2017). A new image contrast enhancement algorithm using exposure fusion framework. In Felsberg, M., Heyden, A., & Krüger, N. (Eds.), Computer analysis of images and patterns, pp. 36–46.
Ying, Z., Li, G., Ren, Y., Wang, R., & Wang, W. (2017). A new low-light image enhancement algorithm using camera response model. In Proceedings of IEEE international conference on computer vision.
Yu, S., & Zhu, H. (2019). Low-illumination image enhancement algorithm based on a physical lighting model. IEEE Transactions on Circuits and Systems for Video Technology, 29(1), 28–37.
Zhang, L., Zhang, L., Mou, X., & Zhang, D. (2011). FSIM: A feature similarity index for image quality assessment. IEEE Transactions on Image Processing, 20(8), 2378–2386.
Zhang, L., Zhang, L., & Bovik, A. C. (2015). A feature-enriched completely blind image quality evaluator. IEEE Transactions on Image Processing, 24(8), 2579–2591.
Zhang, Q., Nie, Y., Zhang, L., & Xiao, C. (2016). Underexposed video enhancement via perception-driven progressive fusion. IEEE Transactions on Visualization and Computer Graphics, 22(6), 1773–1785.
Zhang, S., Zhu, X., Lei, Z., Shi, H., Wang, X., & Li, S. Z. (2017). S3FD: Single shot scale-invariant face detector. In Proceedings of IEEE international conference on computer vision, pp. 192–201.
Zhang, X., Shen, P., Luo, L., Zhang, L., & Song, J. (2012). Enhancement and noise reduction of very low light level images. In Proceedings of IEEE international conference on pattern recognition, pp. 2034–2037.
Zhang, Y., Zhang, J., & Guo, X. (2019). Kindling the darkness: A practical low-light image enhancer. In ACM international conference on multimedia.
Zhu, M., Pan, P., Chen, W., & Yang, Y. (2020). EEMEFN: Low-light image enhancement via edge-enhanced multi-exposure fusion network. In Proceedings of AAAI conference on artificial intelligence.
Metadata
Title: Benchmarking Low-Light Image Enhancement and Beyond
Authors: Jiaying Liu, Dejia Xu, Wenhan Yang, Minhao Fan, Haofeng Huang
Publication date: 11 January 2021
Publisher: Springer US
Published in: International Journal of Computer Vision, Issue 4/2021
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI: https://doi.org/10.1007/s11263-020-01418-8