Published in: International Journal of Computer Vision, Issue 7/2021

07.05.2021

Attention Guided Low-Light Image Enhancement with a Large Scale Low-Light Simulation Dataset

Authors: Feifan Lv, Yu Li, Feng Lu

Abstract

Low-light image enhancement is challenging in that it needs to consider not only brightness recovery but also complex issues like color distortion and noise, which usually hide in the dark. Simply adjusting the brightness of a low-light image will inevitably amplify those artifacts. To address this difficult problem, this paper proposes a novel end-to-end attention-guided method based on a multi-branch convolutional neural network. To this end, we first construct a synthetic dataset with carefully designed low-light simulation strategies. The dataset is much larger and more diverse than existing ones. With the new dataset for training, our method learns two attention maps to guide the brightness enhancement and denoising tasks respectively. The first attention map distinguishes underexposed regions from well-lit regions, and the second attention map distinguishes noise from real textures. With their guidance, the proposed multi-branch decomposition-and-fusion enhancement network works in an input-adaptive way. Moreover, a reinforcement-net further enhances the color and contrast of the output image. Extensive experiments on multiple datasets demonstrate that our method produces high-fidelity enhancement results for low-light images and outperforms the current state-of-the-art methods by a large margin, both quantitatively and visually.
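
The following is a minimal PyTorch sketch of the pipeline as the abstract describes it, not the authors' released implementation: two attention estimators predict the under-exposure and noise maps, a multi-branch network consumes the image together with both maps and fuses its parallel branches, and a small reinforcement stage refines color and contrast. All module names, layer widths, branch counts, and the choice to feed the attention maps as extra input channels are illustrative assumptions; in the paper the two maps guide brightness enhancement and denoising separately, which is collapsed into one enhancer here only for brevity.

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # 3x3 convolution + ReLU; padding preserves the spatial resolution.
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))


class AttentionEstimator(nn.Module):
    """Predicts a single-channel map in [0, 1] (under-exposure or noise attention)."""

    def __init__(self, in_ch=3, width=32):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(in_ch, width), conv_block(width, width),
            nn.Conv2d(width, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)


class MultiBranchEnhancer(nn.Module):
    """Parallel branches of increasing depth process the same input and are fused."""

    def __init__(self, in_ch=5, width=32, branches=3):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(*[conv_block(in_ch if i == 0 else width, width) for i in range(d)])
            for d in range(1, branches + 1))
        self.fuse = nn.Conv2d(width * branches, 3, 1)  # 1x1 fusion back to RGB

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


class ReinforceNet(nn.Module):
    """Small residual refinement stage for color and contrast."""

    def __init__(self, width=16):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, width), nn.Conv2d(width, 3, 3, padding=1))

    def forward(self, x):
        return torch.clamp(x + self.net(x), 0.0, 1.0)


class AttentionGuidedPipeline(nn.Module):
    def __init__(self):
        super().__init__()
        self.exposure_attention = AttentionEstimator()  # underexposed vs. well-lit regions
        self.noise_attention = AttentionEstimator()     # noise vs. real texture
        # Image (3 channels) plus the two attention maps as conditioning channels.
        self.enhancer = MultiBranchEnhancer(in_ch=3 + 2)
        self.reinforce = ReinforceNet()

    def forward(self, low):
        ue_map = self.exposure_attention(low)
        noise_map = self.noise_attention(low)
        enhanced = self.enhancer(torch.cat([low, ue_map, noise_map], dim=1))
        return self.reinforce(enhanced), ue_map, noise_map


if __name__ == "__main__":
    model = AttentionGuidedPipeline()
    dummy = torch.rand(1, 3, 128, 128)  # a stand-in low-light RGB patch
    output, ue_map, noise_map = model(dummy)
    print(output.shape, ue_map.shape, noise_map.shape)
```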

Metadata
Title
Attention Guided Low-Light Image Enhancement with a Large Scale Low-Light Simulation Dataset
Authors
Feifan Lv
Yu Li
Feng Lu
Publication date
07.05.2021
Publisher
Springer US
Published in
International Journal of Computer Vision / Issue 7/2021
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI
https://doi.org/10.1007/s11263-021-01466-8
