2019 | Original Paper | Book Chapter

Fast Omnidirectional Depth Densification

Authors: Hyeonjoong Jang, Daniel S. Jeon, Hyunho Ha, Min H. Kim

Published in: Advances in Visual Computing

Publisher: Springer International Publishing


Abstract

Omnidirectional cameras are commonly equipped with fisheye lenses to capture 360-degree visual information, and severe spherical projective distortion occurs when a 360-degree image is stored as a two-dimensional image array. As a consequence, traditional depth estimation methods are not directly applicable to omnidirectional cameras. Dense depth estimation for omnidirectional imaging has so far been achieved by applying several offline processes, such as patch matching, optical flow, and convolutional propagation filtering, at the cost of heavy additional computation; no dense depth estimation method suitable for real-time applications is available yet. In response, we propose an efficient depth densification method designed for omnidirectional imaging to achieve 360-degree dense depth video with an omnidirectional camera. First, we compute sparse depth estimates using a conventional simultaneous localization and mapping (SLAM) method, and then use these estimates as input to a depth densification method. We propose a novel densification method using the spherical pull-push method by devising a joint spherical pyramid for color and depth, built on multi-level icosahedron subdivision surfaces. This allows us to propagate the sparse depth continuously and efficiently over 360-degree angles in an edge-aware manner. The results demonstrate that our real-time densification method is comparable to state-of-the-art offline methods in terms of per-pixel depth accuracy. Combining our depth densification with conventional SLAM allows us to capture real-time 360-degree RGB-D video with a single omnidirectional camera.
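The pull-push densification at the core of the abstract can be illustrated on an ordinary planar image grid. Note that the paper operates on a spherical pyramid built from multi-level icosahedron subdivision and adds joint color guidance for edge-aware propagation; both are omitted in this minimal sketch, and all function and variable names here are hypothetical, not taken from the paper:

```python
import numpy as np

def pull_push(depth, mask, levels=4):
    """Fill holes in a sparse depth map with a pull-push pyramid.

    depth: 2D float array, valid depth where mask == 1, arbitrary elsewhere.
    mask:  2D float array of confidence weights (1 = known, 0 = unknown).
    Assumes dimensions divisible by 2**levels for simplicity.
    """
    # Pull phase: build coarser levels by summing weighted depth and weights
    # over 2x2 blocks, so each coarse texel is a confidence-weighted average.
    pyramid = [(depth * mask, mask.astype(float))]
    for _ in range(levels):
        d, w = pyramid[-1]
        d2 = d[::2, ::2] + d[1::2, ::2] + d[::2, 1::2] + d[1::2, 1::2]
        w2 = w[::2, ::2] + w[1::2, ::2] + w[::2, 1::2] + w[1::2, 1::2]
        pyramid.append((d2, w2))

    # Normalize the coarsest level (zero where nothing was observed at all).
    d, w = pyramid[-1]
    filled = np.divide(d, w, out=np.zeros_like(d), where=w > 0)

    # Push phase: walk back to finer levels, keeping observed averages and
    # filling the holes with the upsampled coarse estimate.
    for d, w in reversed(pyramid[:-1]):
        up = np.repeat(np.repeat(filled, 2, axis=0), 2, axis=1)
        known = w > 0
        filled = np.where(known,
                          np.divide(d, w, out=np.zeros_like(d), where=known),
                          up)
    return filled
```

Because each level only averages and replicates fixed-size blocks, the whole pass is linear in the number of pixels, which is what makes a pull-push scheme attractive for real-time densification compared to iterative optimization-based propagation.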


Footnotes
1
Note that, contrary to the labeling convention of the image pyramid, we label each level from coarsest to finest in ascending order, following the subdivision labeling convention.
 
Metadata
Title
Fast Omnidirectional Depth Densification
Authors
Hyeonjoong Jang
Daniel S. Jeon
Hyunho Ha
Min H. Kim
Copyright Year
2019
DOI
https://doi.org/10.1007/978-3-030-33720-9_53