
2020 | Original Paper | Book Chapter

Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks

Authors: Oliver Willers, Sebastian Sudholt, Shervin Raafatnia, Stephanie Abrecht

Published in: Computer Safety, Reliability, and Security. SAFECOMP 2020 Workshops

Publisher: Springer International Publishing


Abstract

Deep learning methods are widely regarded as indispensable for designing perception pipelines for autonomous agents such as robots, drones, or automated vehicles. The main reason, however, that deep learning is not yet used in autonomous agents at large scale is safety concerns. Deep learning approaches typically exhibit black-box behavior, which makes them hard to evaluate with respect to safety-critical aspects. While there has been some work on safety in deep learning, most papers focus on high-level safety concerns. In this work, we examine the safety concerns of deep learning methods at a deeply technical level. Additionally, we present extensive discussions of possible mitigation methods and give an outlook on which mitigation methods are still missing to support an argument for the safety of a deep learning method.


Footnotes
1
Please note that while we focus on DNNs, many of the safety concerns discussed in this paper may also be valid for other types of ML-based methods.
 
2
For a concise overview of common threat models see, e.g., [35].
 
3
Defending against adversarial examples is currently a heavily researched topic and there may exist other effective methods; a minimal attack sketch follows these footnotes.
 
4
Even though synthetic data may look “realistic” to a human, the data-level distribution may be significantly different, leading to non-meaningful test results.
 
5
For reasons described in SC-7, the test set used for the ultimate performance evaluation needs to remain unseen until final testing.
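
To make footnote 3 more concrete, the following is a minimal sketch of the fast gradient sign method (FGSM) from Goodfellow et al. [14], one of the attacks any defense would need to withstand. It is illustrative only and not the authors' method: it assumes a PyTorch image classifier with inputs scaled to [0, 1], and the function name fgsm_perturb and the step size epsilon are hypothetical placeholders.

import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    # Illustrative FGSM attack (Goodfellow et al. [14]); assumes inputs in [0, 1].
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction of the loss gradient's sign to maximally increase the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

A simple robustness check would then compare the model's predictions on x and on fgsm_perturb(model, x, y) over a held-out test set.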
 
References
1. Adler, R., et al.: Hardening of artificial neural networks for use in safety-critical applications - a mapping study. arXiv (2019)
2. Alcorn, M.A., et al.: Strike (with) a pose: neural networks are easily fooled by strange poses of familiar objects. arXiv (2018)
3. Blundell, C., Cornebise, J., Kavukcuoglu, K., Wierstra, D.: Weight uncertainty in neural networks. In: ICML (2015)
5. Brown, T.B., Mané, D., Roy, A., Abadi, M., Gilmer, J.: Adversarial Patch. arXiv (2017)
7. Burton, S., Gauerhof, L., Sethy, B.B., Habli, I., Hawkins, R.: Confidence arguments for evidence of performance in machine learning for highly automated driving functions. In: Romanovsky, A., Troubitsyna, E., Gashi, I., Schoitsch, E., Bitsch, F. (eds.) SAFECOMP 2019. LNCS, vol. 11699, pp. 365–377. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-26250-1_30
8. Carlini, N., Wagner, D.A.: Towards evaluating the robustness of neural networks. In: IEEE Symposium on Security and Privacy (2017)
9. Engstrom, L., Tran, B., Tsipras, D., Schmidt, L., Madry, A.: Exploring the landscape of spatial robustness. In: ICML (2019)
10. Eykholt, K., et al.: Physical Adversarial Examples for Object Detectors. arXiv (2018)
11. Gal, Y., Ghahramani, Z.: Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In: ICML (2016)
13. Gharib, M., Lollini, P., Botta, M., Amparore, E., Donatelli, S., Bondavalli, A.: On the safety of automotive systems incorporating machine learning based components: a position paper. In: DSN (2018)
14. Goodfellow, I., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: ICLR (2015)
15. Guo, C., Pleiss, G., Sun, Y., Weinberger, K.Q.: On Calibration of Modern Neural Networks. arXiv (2017)
16. Haase-Schütz, C., Hertlein, H., Wiesbeck, W.: Estimating labeling quality with deep object detectors. In: IEEE IV (2019)
17. Hein, M., Andriushchenko, M., Bitterwolf, J.: Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem. In: CVPR (2019)
18. Hendrycks, D., Dietterich, T.: Benchmarking neural network robustness to common corruptions and perturbations. In: ICLR (2019)
19. ISO: Road vehicles - functional safety (ISO 26262) (2018)
20. ISO: Road vehicles - safety of the intended functionality (ISO/PAS 21448) (2019)
21. Kletz, T.A.: HAZOP & HAZAN: Notes on the Identification and Assessment of Hazards. Hazard Workshop Modules, Institution of Chemical Engineers (1986)
22. Koopman, P., Fratrik, F.: How many operational design domains, objects, and events? In: Workshop on AI Safety (2019)
23. Kurd, Z., Kelly, T.: Establishing safety criteria for artificial neural networks. In: Knowledge-Based Intelligent Information and Engineering Systems (2003)
24. Lampert, C.H., Nickisch, H., Harmeling, S.: Attribute-based classification for zero-shot visual object categorization. In: TPAMI (2014)
25. Lee, M., Kolter, J.Z.: On Physical Adversarial Patches for Object Detection. arXiv (2019)
26. Li, J., Schmidt, F.R., Kolter, J.Z.: Adversarial camera stickers: a physical camera-based attack on deep learning systems. arXiv (2019)
27. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. In: ICLR (2018)
28. Morgulis, N., Kreines, A., Mendelowitz, S., Weisglass, Y.: Fooling a Real Car with Adversarial Traffic Signs. arXiv (2019)
29. Pakdaman Naeini, M., Cooper, G., Hauskrecht, M.: Obtaining well calibrated probabilities using Bayesian binning. In: AAAI (2015)
30. Schumann, J., Gupta, P., Liu, Y.: Application of neural networks in high assurance systems: a survey. In: Schumann, J., Liu, Y. (eds.) Applications of Neural Networks in High Assurance Systems. Studies in Computational Intelligence, vol. 268, pp. 1–19. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-10690-3_1
31. Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: visual explanations from deep networks via gradient-based localization. In: ICCV (2017)
32. Szegedy, C., et al.: Intriguing properties of neural networks. In: ICLR (2014)
33. Varshney, K.R.: Engineering safety in machine learning. In: Information Theory and Applications Workshop (2016)
34. Wong, E., Kolter, J.Z.: Provable defenses against adversarial examples via the convex outer adversarial polytope. In: ICML (2018)
35. Yuan, X., He, P., Zhu, Q., Li, X.: Adversarial examples: attacks and defenses for deep learning. In: TNNLS (2019)
36. Zendel, O., Murschitz, M., Humenberger, M., Herzner, W.: CV-HAZOP: introducing test data validation for computer vision. In: ICCV (2015)
38. Zhang, J.M., Harman, M., Ma, L., Liu, Y.: Machine Learning Testing: Survey, Landscapes and Horizons. arXiv (2019)
Metadata
Title
Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks
Authors
Oliver Willers
Sebastian Sudholt
Shervin Raafatnia
Stephanie Abrecht
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-55583-2_25
