2020 | Original Paper | Book Chapter

Structuring the Safety Argumentation for Deep Neural Network Based Perception in Automotive Applications

Authors: Gesina Schwalbe, Bernhard Knie, Timo Sämann, Timo Dobberphul, Lydia Gauerhof, Shervin Raafatnia, Vittorio Rocco

Published in: Computer Safety, Reliability, and Security. SAFECOMP 2020 Workshops

Publisher: Springer International Publishing

Abstract

Deep neural networks (DNNs) are widely considered a key technology for perception in high and full driving automation. However, their safety assessment remains challenging, as they exhibit specific insufficiencies: black-box nature, simple performance issues, incorrect internal logic, and instability. These are not sufficiently considered in existing standards on safety argumentation. In this paper, we systematically establish and break down safety requirements to argue the sufficient absence of risk arising from such insufficiencies. We furthermore argue why diverse evidence is highly relevant for a safety argument involving DNNs, and classify available sources of evidence. Together, this yields a generic approach and template to thoroughly respect DNN specifics within a safety argumentation structure. Its applicability is shown by providing examples of methods and measures following an example use case based on pedestrian detection.
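
The abstract's core idea, decomposing a top-level safety goal into requirements that each address one DNN-specific insufficiency, with every requirement backed by diverse evidence, can be illustrated with a minimal Python sketch. This is not the authors' template from the paper: the Claim class, the requirement statements, and the evidence labels below are hypothetical placeholders for the structure the abstract describes.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Claim:
    """A node in a safety argument: supported by sub-claims or by evidence."""
    statement: str
    evidence: List[str] = field(default_factory=list)    # evidence artifact labels
    children: List["Claim"] = field(default_factory=list)

    def is_supported(self) -> bool:
        # A leaf claim needs at least one evidence item; an inner claim is
        # supported only if all of its sub-claims are supported.
        if not self.children:
            return bool(self.evidence)
        return all(child.is_supported() for child in self.children)


# Top-level goal for the pedestrian-detection use case, broken down along the
# four DNN insufficiencies named in the abstract.
goal = Claim(
    "Risk from DNN insufficiencies in pedestrian detection is sufficiently low",
    children=[
        Claim("The black-box nature is mitigated",
              evidence=["explainability analysis"]),
        Claim("Performance is sufficient across the operational domain",
              evidence=["test-set metrics", "field-test statistics"]),
        Claim("The learned internal logic matches the intended function",
              evidence=["plausibility checks on learned features"]),
        Claim("Outputs are stable under input perturbations",
              evidence=["robustness tests against perturbed inputs"]),
    ],
)

print(goal.is_supported())  # True: every leaf lists at least one evidence item

Listing several evidence labels per claim mirrors the abstract's point that diverse sources of evidence strengthen a safety argument involving DNNs.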

Metadata
Title
Structuring the Safety Argumentation for Deep Neural Network Based Perception in Automotive Applications
Authors
Gesina Schwalbe
Bernhard Knie
Timo Sämann
Timo Dobberphul
Lydia Gauerhof
Shervin Raafatnia
Vittorio Rocco
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-55583-2_29