
2019 | Original Paper | Book Chapter

UnrealGT: Using Unreal Engine to Generate Ground Truth Datasets

Authors: Thomas Pollok, Lorenz Junglas, Boitumelo Ruf, Arne Schumann

Published in: Advances in Visual Computing

Publisher: Springer International Publishing


Abstract

Large amounts of data have become an essential requirement in the development of modern computer vision algorithms, e.g. for the training of neural networks. Due to data protection laws, overflight permissions for UAVs, or the need for expensive equipment, data collection is often a costly and time-consuming task, especially if the ground truth is generated by manually annotating the collected data. By means of synthetic data generation, large amounts of image and metadata can be extracted directly from a virtual scene, which in turn can be customized to meet the specific needs of the algorithm or the use case. Furthermore, the use of virtual objects avoids problems that might arise due to data protection issues and does not require the use of expensive sensors. In this work we propose a framework for synthetic test data generation utilizing the Unreal Engine. The Unreal Engine provides a simulation environment that allows one to simulate complex situations in a virtual world, such as data acquisition with UAVs or autonomous driving. However, our process is agnostic to the computer vision task for which the data is generated and can thus be used to create generic datasets. We evaluate our framework by generating synthetic test data with which a CNN for object detection as well as a V-SLAM algorithm are trained and evaluated. The evaluation shows that our generated synthetic data can be used as an alternative to real data.
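To make the kind of output such a pipeline produces more concrete, the following is a minimal Python sketch of how one synthetic sample (a rendered frame plus its automatically extracted ground truth) might be stored on disk. All file names, field names, and the JSON layout are illustrative assumptions for this sketch, not the actual UnrealGT export format.

```python
# Hypothetical example: storing one synthetic ground-truth sample.
# File names, fields, and JSON layout are illustrative assumptions only.
import json
from pathlib import Path

def write_sample(out_dir: Path, frame_id: int, image_bytes: bytes,
                 boxes: list, camera_pose: dict) -> None:
    """Store one rendered frame together with its ground-truth metadata."""
    out_dir.mkdir(parents=True, exist_ok=True)

    # The rendered RGB frame as produced by the virtual camera.
    (out_dir / f"{frame_id:06d}.png").write_bytes(image_bytes)

    # Ground truth a virtual scene can provide without manual annotation:
    # 2D boxes for object detection and the exact camera pose, e.g. for
    # evaluating a V-SLAM trajectory against it.
    annotation = {
        "frame": frame_id,
        "objects": [{"class": cls, "bbox_xywh": bbox} for cls, bbox in boxes],
        "camera_pose": camera_pose,  # e.g. {"xyz": [...], "quat_xyzw": [...]}
    }
    (out_dir / f"{frame_id:06d}.json").write_text(json.dumps(annotation, indent=2))

# Usage with dummy data (the image bytes are a placeholder, not a valid PNG):
write_sample(Path("dataset/train"), 0, b"\x89PNG-placeholder",
             boxes=[("car", [120, 80, 64, 48])],
             camera_pose={"xyz": [0.0, 1.5, -4.0], "quat_xyzw": [0, 0, 0, 1]})
```

Pairing each frame with a machine-readable annotation file in this way keeps the generated dataset task-agnostic: a detection pipeline can consume the boxes while a SLAM evaluation uses the poses, without re-rendering the scene.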


Metadata
Title
UnrealGT: Using Unreal Engine to Generate Ground Truth Datasets
Authors
Thomas Pollok
Lorenz Junglas
Boitumelo Ruf
Arne Schumann
Copyright Year
2019
DOI
https://doi.org/10.1007/978-3-030-33720-9_52
