
2018 | Original Paper | Book Chapter

Underwater Light Field Depth Map Restoration Using Deep Convolutional Neural Fields

Authors: Huimin Lu, Yujie Li, Hyoungseop Kim, Seiichi Serikawa

Published in: Artificial Intelligence and Robotics

Publisher: Springer International Publishing


Abstract

Underwater optical images are usually degraded by low lighting, strong turbidity-induced scattering, and wavelength-dependent absorption. A great deal of work has addressed these issues to improve the quality of underwater images, most of it relying on high-intensity LED lighting to obtain high-contrast images. In highly turbid water, however, high-intensity LEDs cause strong scattering and absorption. In this paper, we propose a light field imaging approach to underwater depth map estimation in low-intensity lighting environments. Specifically, we tackle de-scattering of light field images by using deep convolutional neural fields for depth estimation. Experimental results on challenging real-world underwater imagery demonstrate the effectiveness of the proposed method.
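The de-scattering step the abstract describes can be illustrated with the common single-scattering image formation model, in which the observed intensity is a depth-dependent blend of scene radiance and backscattered veiling light. The sketch below inverts that model given a per-pixel depth map (such as one predicted by a deep convolutional neural field regressor); the function name `descatter`, the uniform attenuation coefficient `beta`, and the constant veiling light `airlight` are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def descatter(image, depth, airlight, beta):
    """Invert a simplified underwater scattering model.

    Assumes the standard haze/attenuation formulation
        I(x) = J(x) * t(x) + A * (1 - t(x)),   t(x) = exp(-beta * d(x)),
    where d is the per-pixel depth map. Solving for the scene
    radiance J gives J = (I - A) / t + A.
    """
    t = np.exp(-beta * depth)[..., None]      # per-pixel transmission, broadcast over channels
    t = np.clip(t, 1e-3, 1.0)                 # guard against division blow-up at large depths
    return (image - airlight) / t + airlight  # recovered radiance J

# Toy usage on a uniform image with a constant depth map.
img = np.full((4, 4, 3), 0.6)
d = np.ones((4, 4))
out = descatter(img, d, airlight=np.array([0.8, 0.8, 0.8]), beta=0.5)
```

In a real pipeline the depth map would come from the neural-field depth estimator rather than being constant, and `beta` would be wavelength-dependent to model the stronger absorption of red light underwater.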


Metadata
Title
Underwater Light Field Depth Map Restoration Using Deep Convolutional Neural Fields
Authors
Huimin Lu
Yujie Li
Hyoungseop Kim
Seiichi Serikawa
Copyright Year
2018
DOI
https://doi.org/10.1007/978-3-319-69877-9_33