
Self-Supervised Normalizing Flow for Jointing Low-Light Enhancement and Deblurring

Authors: Lingyan Li, Chunzi Zhu, Jiale Chen, Baoshun Shi, Qiusheng Lian

Published in: Circuits, Systems, and Signal Processing | Issue 9/2024


Abstract

Low-light image enhancement algorithms have been widely developed. Nevertheless, long exposures under low-light conditions introduce motion blur into the captured images, which makes it challenging to address low-light enhancement and deblurring jointly. A recent effort called LEDNet tackles these issues with an encoder-decoder pipeline. However, LEDNet relies on paired data during training, and capturing low-light blurry and normal-light sharp images of the same visual scene simultaneously is difficult. To overcome these challenges, we propose a self-supervised normalizing flow called SSFlow for joint low-light enhancement and deblurring. SSFlow consists of two modules: an orthogonal channel attention U-Net (OAtt-UNet) module for extracting features, and a normalizing flow for color correction and denoising (CCD flow). During the training of SSFlow, the two modules are connected to each other by a color map. Concretely, the OAtt-UNet module is a U-Net variant consisting of an encoder and a decoder. It takes a low-light blurry image as input and incorporates an orthogonal channel attention block into the encoder to improve the representation ability of the overall network. A filter adaptive convolutional layer is integrated into the decoder, applying a dynamic convolution filter to each element of the feature map for effective deblurring. To extract color information and denoise, the CCD flow exploits the powerful learning ability of normalizing flows. We construct an unsupervised loss function that continuously optimizes the network by enforcing consistency between the color maps of the two modules in color space. The effectiveness of the proposed network is demonstrated through both qualitative and quantitative experiments. Code is available at https://github.com/shibaoshun/SSFlow.
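To make the self-supervision concrete, below is a minimal PyTorch sketch of the training step implied by the abstract: a stand-in OAtt-UNet restorer and a stand-in CCD flow are optimized from low-light blurry inputs alone by matching the color maps of their outputs. Everything here is an illustrative assumption: the chromaticity-style color_map, the tiny placeholder network bodies, and the L1 consistency loss. It is not the authors' implementation (see the repository linked above for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def color_map(img: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Per-pixel chromaticity: each RGB channel divided by the channel sum.
    # One plausible reading of the "color map" that couples the two modules;
    # the paper may define it differently.
    return img / (img.sum(dim=1, keepdim=True) + eps)


class OAttUNet(nn.Module):
    """Placeholder for the OAtt-UNet module. The real encoder uses orthogonal
    channel attention blocks and the real decoder uses filter adaptive
    convolutional layers; here a tiny conv stack keeps the sketch runnable."""

    def __init__(self) -> None:
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)  # enhanced, deblurred estimate


class CCDFlow(nn.Module):
    """Placeholder for the color-correcting/denoising normalizing flow. A real
    flow would stack invertible (e.g. Glow-style) coupling layers and track a
    log-determinant term; that machinery is omitted here."""

    def __init__(self) -> None:
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)  # color-corrected, denoised estimate


def self_supervised_step(restorer, flow, low_blur, optimizer):
    # No normal-sharp ground truth is used: the loss only asks the two
    # branches to agree in color space, which is the unsupervised signal
    # described in the abstract.
    restored = restorer(low_blur)
    color_corrected = flow(low_blur)
    loss = F.l1_loss(color_map(restored), color_map(color_corrected))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    restorer, flow = OAttUNet(), CCDFlow()
    opt = torch.optim.Adam(
        list(restorer.parameters()) + list(flow.parameters()), lr=1e-4
    )
    low_blur = torch.rand(2, 3, 64, 64)  # stand-in low-light blurry crops
    print(self_supervised_step(restorer, flow, low_blur, opt))
```

The full loss in the paper may include additional terms; this sketch only mirrors the color-map hand-off that, per the abstract, ties the two modules together during training.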

Metadata
Title
Self-Supervised Normalizing Flow for Jointing Low-Light Enhancement and Deblurring
Authors
Lingyan Li
Chunzi Zhu
Jiale Chen
Baoshun Shi
Qiusheng Lian
Publication date
31.05.2024
Publisher
Springer US
Published in
Circuits, Systems, and Signal Processing / Issue 9/2024
Print ISSN: 0278-081X
Electronic ISSN: 1531-5878
DOI
https://doi.org/10.1007/s00034-024-02723-0