
2023 | Original Paper | Book Chapter

Game Engine-Based Synthetic Dataset Generation of Entities on Construction Site

Authors: Shenghan Li, Yaolin Zhang, Yi Tan

Published in: Proceedings of the 27th International Symposium on Advancement of Construction Management and Real Estate

Publisher: Springer Nature Singapore


Abstract

Computer vision has been widely used on construction sites for progress monitoring and safety monitoring. However, collecting data from construction sites and labeling it into datasets is a time-consuming, labor-intensive, and costly task. Therefore, a game engine-based synthetic dataset generation approach for construction site entities is proposed to address the lack of construction site datasets. In this research, construction site scene models are assembled by grouping existing digital on-site assets, and image annotation and camera calibration files are generated automatically by scripts developed in the selected game engine. The movement of each model is also controlled by scripts, and the scene is rendered with the High Definition Render Pipeline (HDRP) to obtain high-resolution images. Components such as Transform and Box Collider are used to obtain the coordinates of each object relative to the camera and the size of its bounding box, and to generate the labels automatically. In addition, the focal length, field of view (FOV), and other parameters of the camera component are used to compute the camera intrinsics when generating calibration files. With this method, a large amount of synthetic data can be quickly acquired and labeled, significantly reducing the time needed to build datasets of on-site entities. Finally, a computer vision model trained on the synthetic dataset achieved 91.6% mAP on a real dataset.
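The calibration step described above, deriving camera intrinsics from the camera component's FOV and image size, and projecting object corners into image-space bounding boxes, can be sketched as follows. This is a minimal illustration only, assuming Unity's vertical-FOV convention, square pixels, and a centred principal point; the function names are illustrative and not taken from the paper.

```python
import math


def intrinsics_from_fov(width, height, vfov_deg):
    """Build a 3x3 pinhole intrinsic matrix from image size and vertical FOV.

    Unity's Camera component exposes a vertical field of view in degrees;
    with square pixels and a centred principal point, the focal length in
    pixels follows from f_y = (H / 2) / tan(vfov / 2).
    """
    fy = (height / 2.0) / math.tan(math.radians(vfov_deg) / 2.0)
    fx = fy  # square-pixel assumption
    cx, cy = width / 2.0, height / 2.0
    return [[fx, 0.0, cx],
            [0.0, fy, cy],
            [0.0, 0.0, 1.0]]


def project(K, point_cam):
    """Project a camera-space point (x, y, z) with z > 0 to pixel coordinates."""
    x, y, z = point_cam
    u = K[0][0] * x / z + K[0][2]
    v = K[1][1] * y / z + K[1][2]
    return u, v


def bbox_from_corners(K, corners_cam):
    """Axis-aligned 2D bounding box enclosing the projected 3D box corners."""
    pts = [project(K, c) for c in corners_cam]
    us, vs = zip(*pts)
    return min(us), min(vs), max(us), max(vs)
```

In this sketch the eight world-space corners of an object's Box Collider would first be transformed into camera space, then passed to `bbox_from_corners` to obtain the 2D label; the same `K` matrix would be written to the calibration file.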


Metadata
Title: Game Engine-Based Synthetic Dataset Generation of Entities on Construction Site
Authors: Shenghan Li, Yaolin Zhang, Yi Tan
Copyright year: 2023
Publisher: Springer Nature Singapore
DOI: https://doi.org/10.1007/978-981-99-3626-7_123