2020 | Original Paper | Book Chapter

Fast Approximate Light Field Volume Rendering: Using Volume Data to Improve Light Field Synthesis via Convolutional Neural Networks

Authors: Seán Bruton, David Ganter, Michael Manzke

Published in: Computer Vision, Imaging and Computer Graphics Theory and Applications

Publisher: Springer International Publishing

Abstract

Volume visualization pipelines stand to benefit from light field display technology, which offers enhanced perceptual qualities. However, such displays require a substantial increase in the number of rendered pixels to operate at interactive rates. Because volume rendering relies on ray-tracing techniques, this increase in resolution is challenging to achieve on modest hardware. In this work, we demonstrate an approach that synthesizes the majority of the light field viewpoints from a small set of rendered viewpoints via a convolutional neural network. We show that synthesis performance can be further improved by giving the network access to the volume data itself. To do this efficiently, we propose a range of approaches and evaluate them on two datasets collected for this task. All of these approaches improve synthesis performance while avoiding expensive 3D convolutional operations. With this approach, we reduce light field volume rendering time by a factor of 8 for our test case.
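
To make the idea concrete, the sketch below shows one possible way such a synthesis network could be structured; it is purely illustrative and is not the authors' published architecture. A small set of rendered corner views is concatenated channel-wise with per-pixel features derived from the volume data, and a stack of 2D convolutions (no 3D convolutions) predicts the remaining light field viewpoints in a single pass. All layer sizes, channel counts, and names are assumptions made for this example.

import torch
import torch.nn as nn

class LightFieldSynthesisNet(nn.Module):
    """Illustrative 2D-convolutional view-synthesis sketch (not the paper's model)."""

    def __init__(self, n_input_views=4, n_volume_features=8, n_output_views=60):
        super().__init__()
        # RGB channels of the rendered corner views plus volume-derived feature channels
        in_channels = n_input_views * 3 + n_volume_features
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # Predict RGB for every remaining light field viewpoint, using only 2D convolutions
        self.decoder = nn.Sequential(
            nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, n_output_views * 3, kernel_size=3, padding=1),
        )

    def forward(self, corner_views, volume_features):
        # corner_views:    (B, n_input_views * 3, H, W)  rendered viewpoints
        # volume_features: (B, n_volume_features, H, W)  per-pixel features sampled from the volume
        x = torch.cat([corner_views, volume_features], dim=1)
        return self.decoder(self.encoder(x))

# Example usage: 4 rendered views plus 8 volume-feature channels -> 60 synthesized views
net = LightFieldSynthesisNet()
views = net(torch.randn(1, 12, 256, 256), torch.randn(1, 8, 256, 256))
print(views.shape)  # torch.Size([1, 180, 256, 256])

The key design point mirrored here is that the volume data enters the network only as 2D feature maps aligned with the image plane, so the cost of 3D convolutions over the full volume is avoided.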

Metadata
Title
Fast Approximate Light Field Volume Rendering: Using Volume Data to Improve Light Field Synthesis via Convolutional Neural Networks
Authors
Seán Bruton
David Ganter
Michael Manzke
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-41590-7_14