
2021 | Original Paper | Book Chapter

Low-Light Color Imaging via Dual Camera Acquisition

Authors: Peiyao Guo, Zhan Ma

Published in: Computer Vision – ACCV 2020

Publisher: Springer International Publishing


Abstract

Existing low-light color imaging with a single camera suffers from unrealistic color representation or blurry textures. We are therefore motivated to devise a dual camera system, pairing a high spatial resolution (HSR) monochrome camera with a low spatial resolution (LSR) color camera, to synthesize high-quality color images under low-light illumination. The key problem is how to efficiently learn and fuse cross-camera information in such a heterogeneous setup with domain gaps (e.g., color vs. monochrome, HSR vs. LSR). We divide the end-to-end pipeline into three consecutive modularized sub-tasks: reference-based exposure compensation (RefEC), reference-based colorization (RefColor), and reference-based super-resolution (RefSR). Together they alleviate the domain gaps and capture inter-camera dynamics between the hybrid inputs. In each step, a deep neural network (DNN) transfers and enhances the illuminative, spectral, and spatial granularity in a data-driven way. Each module is first trained separately and then jointly fine-tuned for robust and reliable performance. Experimental results show that our method delivers leading performance on synthetic content from popular test datasets compared to existing algorithms, and offers appealing color reconstruction on real scenes captured with an industrial monochrome camera and a smartphone RGB camera.
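The three-stage pipeline above can be sketched as a function composition over the two camera inputs. The functions below are hypothetical toy stand-ins (the real RefEC, RefColor, and RefSR modules are deep networks); only the data flow between the HSR monochrome frame and the LSR color frame follows the paper's description, and the nearest-neighbour upsampling and global-gain exposure step are illustrative assumptions, not the authors' method.

```python
import numpy as np

def ref_ec(mono_hsr, color_lsr):
    """Toy exposure compensation: scale the underexposed LSR color
    frame toward the mean brightness of the HSR monochrome reference."""
    gain = mono_hsr.mean() / max(color_lsr.mean(), 1e-6)
    return np.clip(color_lsr * gain, 0.0, 1.0)

def ref_color(mono_hsr, color_comp):
    """Toy colorization: transfer chrominance from the (upsampled)
    color frame onto the monochrome luminance."""
    h, w = mono_hsr.shape
    # naive nearest-neighbour upsampling (assumes integer scale factor)
    up = color_comp.repeat(h // color_comp.shape[0], axis=0) \
                   .repeat(w // color_comp.shape[1], axis=1)
    chroma = up - up.mean(axis=2, keepdims=True)
    return np.clip(mono_hsr[..., None] + chroma, 0.0, 1.0)

def ref_sr(colorized, color_comp):
    """Toy super-resolution: a no-op placeholder here, since the toy
    colorization step already produced output at HSR resolution."""
    return colorized

def dual_camera_pipeline(mono_hsr, color_lsr):
    comp = ref_ec(mono_hsr, color_lsr)            # RefEC
    return ref_sr(ref_color(mono_hsr, comp), comp)  # RefColor -> RefSR

mono = np.random.rand(64, 64)            # HSR monochrome, values in [0,1]
color = np.random.rand(16, 16, 3) * 0.2  # LSR color, underexposed
out = dual_camera_pipeline(mono, color)
print(out.shape)  # (64, 64, 3)
```

The composition makes the module boundaries explicit, mirroring how the paper trains each stage separately before joint fine-tuning.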


Footnotes
1
Here, Y and UV represent the luminance and chrominance components in the YUV color space, which is widely adopted in image/video applications.
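For concreteness, the standard BT.601 luma/chroma conversion commonly used in image and video applications looks as follows; these coefficients are the textbook ones, not values specific to this paper.

```python
def rgb_to_yuv(r, g, b):
    """Convert normalized RGB in [0,1] to Y (luminance) and U, V
    (chrominance) using the classic BT.601 coefficients."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)  # blue-difference chrominance
    v = 0.877 * (r - y)  # red-difference chrominance
    return y, u, v

# Pure white carries full luminance and zero chrominance.
y, u, v = rgb_to_yuv(1.0, 1.0, 1.0)
print(round(y, 6), round(u, 6), round(v, 6))  # 1.0 0.0 0.0
```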
 
Metadata
Title
Low-Light Color Imaging via Dual Camera Acquisition
Authors
Peiyao Guo
Zhan Ma
Copyright Year
2021
DOI
https://doi.org/10.1007/978-3-030-69532-3_10
