
2024 | Original Paper | Book Chapter

DigiWeather: Synthetic Rain, Snow and Fog Dataset Augmentation

Author: Ivan Nikolov

Published in: Extended Reality

Publisher: Springer Nature Switzerland

Abstract

Ensuring the resilience of deep learning algorithms to data changes, especially in outdoor scenarios with dynamic weather conditions, is challenging because training data is often captured over short periods. Weather variations such as rain, snow, and fog can introduce concept drift and significantly impact model accuracy, yet expanding datasets to cover diverse temporal variations is often impractical due to time and cost constraints. Instead, we propose an easily deployable and scalable approach for augmenting weather effects onto existing data, leveraging the Unity game engine for synthetic image generation. Our method quickly produces large amounts of augmented videos and images, requires off-the-shelf models only for pre-processing, and allows flexible combinations of effects to simulate a variety of weather conditions. We introduce Weathervenue, an augmented subset of the CUHK Avenue dataset, and use it to test four anomaly detection models as well as models for object detection, semantic segmentation, and depth estimation. The results show performance degradation ranging from 10% to 35% across all anomaly detectors and visibly degraded results for the other methods, underscoring the need for our solution when creating more challenging scenarios and training robust models. We also show that training on a combination of real and augmented data can boost performance on rain, snow, and fog test data by up to 10%, while only minimally affecting results on clear-weather data. The code and augmented dataset are available at https://github.com/IvanNik17/DigiWeather.
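As a rough illustration of the kind of augmentation described above, and not the paper's actual Unity-based pipeline, the following Python sketch composites synthetic fog onto a single frame using a monocular depth estimate and the standard transmission model t = exp(-beta * d). The file names and parameter values are illustrative assumptions, not taken from the DigiWeather repository.

```python
# Minimal fog-compositing sketch (illustration only, not the paper's
# Unity pipeline). Fog is blended into a clear frame with the standard
# atmospheric scattering model:
#     I(x) = J(x) * t(x) + A * (1 - t(x)),   t(x) = exp(-beta * d(x))
# where J is the clear frame, d a depth estimate from an off-the-shelf
# monocular depth model, A the airlight colour and beta the fog density.
import cv2
import numpy as np


def add_fog(frame_bgr: np.ndarray, depth: np.ndarray,
            beta: float = 2.0, airlight: float = 0.9) -> np.ndarray:
    """Blend a grey airlight into the frame, weighted by normalised depth."""
    img = frame_bgr.astype(np.float32) / 255.0
    # Normalise depth to [0, 1]; assumes larger values mean farther away
    # (invert first if the depth model outputs inverse depth).
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    t = np.exp(-beta * d)[..., None]           # transmission map, HxWx1
    fogged = img * t + airlight * (1.0 - t)    # scattering model
    return (np.clip(fogged, 0.0, 1.0) * 255).astype(np.uint8)


if __name__ == "__main__":
    # Hypothetical file names for a single frame and its depth map.
    frame = cv2.imread("frame_0001.png")
    depth = cv2.imread("depth_0001.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
    cv2.imwrite("frame_0001_fog.png", add_fog(frame, depth))
```

Rain and snow would typically be layered on top of such a base effect; in the paper itself all three effects are rendered with the Unity game engine and composited over the original footage.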

Metadata
Title
DigiWeather: Synthetic Rain, Snow and Fog Dataset Augmentation
Author
Ivan Nikolov
Copyright Year
2024
DOI
https://doi.org/10.1007/978-3-031-71707-9_2