29.11.2022 | Original Article

Near-infrared fusion for deep lightness enhancement

Authors: Linbo Wang, Tao Wang, Deyun Yang, Xianyong Fang, Shaohua Wan

Published in: International Journal of Machine Learning and Cybernetics | Issue 5/2023

Abstract

Lightness enhancement is a long-standing research topic in computer vision. Existing deep learning-based approaches usually extract features from the low-light image alone to model the enlightening process, which can lack robustness because low-light features are unreliable in heavily dark regions. Inspired by the fact that infrared imaging is immune to illumination variation, we propose to exploit an extra infrared image to help brighten the low-light one. Specifically, we design a deep convolutional neural network that jointly extracts infrared and low-light features and produces a normal-light image under the supervision of multi-scale loss functions, including a discriminator loss that encourages the output image to mimic a real normal-light one. Moreover, a contextual attention module is proposed to reconstruct reliable low-light features in heavily dark regions by exploiting feature-correlation consistency between the low-light and infrared features. Extensive experiments on two composited datasets and one real-world dataset demonstrate the superiority of the proposed approach over existing methods, both qualitatively and quantitatively.
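To make the fusion idea concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the class names `ContextualAttention` and `FusionEnhancer`, the layer widths, and the exact fusion strategy are assumptions for illustration only. It shows the mechanism described above: affinities are computed on the illumination-invariant infrared features and then used to reassemble the low-light features in dark regions before the two streams are concatenated and decoded; the multi-scale and discriminator losses described in the abstract would supervise this forward pass.

```python
# Hypothetical sketch of NIR-guided contextual attention for low-light
# enhancement (not the paper's code; shapes and layer choices are assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContextualAttention(nn.Module):
    """Reconstruct low-light features in dark regions by borrowing from
    locations that the infrared features indicate are similar."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 2, 1)  # from NIR features
        self.key = nn.Conv2d(channels, channels // 2, 1)    # from NIR features
        self.value = nn.Conv2d(channels, channels, 1)       # from low-light features

    def forward(self, feat_ll, feat_nir):
        b, c, h, w = feat_ll.shape
        q = self.query(feat_nir).flatten(2).transpose(1, 2)  # (B, HW, C/2)
        k = self.key(feat_nir).flatten(2)                     # (B, C/2, HW)
        v = self.value(feat_ll).flatten(2).transpose(1, 2)    # (B, HW, C)
        # Affinity is computed on illumination-invariant NIR features, then
        # used to re-assemble the (possibly unreliable) low-light features.
        attn = F.softmax(torch.bmm(q, k) / (c // 2) ** 0.5, dim=-1)  # (B, HW, HW)
        out = torch.bmm(attn, v).transpose(1, 2).reshape(b, c, h, w)
        return feat_ll + out  # residual keeps the original low-light content


class FusionEnhancer(nn.Module):
    """Two-branch encoder + attention fusion + decoder (illustrative only)."""

    def __init__(self, base=32):
        super().__init__()
        self.enc_ll = nn.Sequential(nn.Conv2d(3, base, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(base, base, 3, padding=1), nn.ReLU())
        self.enc_nir = nn.Sequential(nn.Conv2d(1, base, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(base, base, 3, padding=1), nn.ReLU())
        self.attn = ContextualAttention(base)
        self.dec = nn.Sequential(nn.Conv2d(2 * base, base, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(base, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, low_light, nir):
        f_ll = self.enc_ll(low_light)    # features from the dark RGB image
        f_nir = self.enc_nir(nir)        # features from the NIR image
        f_ll = self.attn(f_ll, f_nir)    # repair dark-region features via NIR affinity
        return self.dec(torch.cat([f_ll, f_nir], dim=1))


if __name__ == "__main__":
    model = FusionEnhancer()
    rgb = torch.rand(1, 3, 64, 64)   # low-light RGB image
    nir = torch.rand(1, 1, 64, 64)   # paired near-infrared image
    print(model(rgb, nir).shape)     # torch.Size([1, 3, 64, 64])
```

Note that the global attention shown here is memory-hungry at full resolution; a patch-based or downsampled affinity would be the natural choice in practice.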

Metadata
Title
Near-infrared fusion for deep lightness enhancement
Authors
Linbo Wang
Tao Wang
Deyun Yang
Xianyong Fang
Shaohua Wan
Publication date
29.11.2022
Publisher
Springer Berlin Heidelberg
Published in
International Journal of Machine Learning and Cybernetics / Issue 5/2023
Print ISSN: 1868-8071
Electronic ISSN: 1868-808X
DOI
https://doi.org/10.1007/s13042-022-01716-2
