2023 | Original Paper | Book Chapter

Safety and Robustness for Deep Neural Networks: An Automotive Use Case

Written by: Davide Bacciu, Antonio Carta, Claudio Gallicchio, Christoph Schmittner

Published in: Computer Safety, Reliability, and Security. SAFECOMP 2023 Workshops

Publisher: Springer Nature Switzerland


Abstract

Current automotive safety standards are cautious about using deep neural networks in safety-critical scenarios because of concerns regarding robustness to noise, domain drift, and uncertainty quantification. In this paper, we propose a scenario in which a neural network adjusts the automated driving style to reduce user stress. In this scenario, only certain actions are safety-critical, allowing for greater control over the model's behavior. To demonstrate how safety can be addressed, we propose a mechanism based on robustness quantification and a fallback plan. This approach enables the model to minimize user stress in safe conditions while avoiding unsafe actions in uncertain scenarios. By exploring this use case, we hope to inspire discussion on identifying safety-critical scenarios and approaches where neural networks can be safely utilized. We also see this as a potential contribution to the development of new standards and best practices for the use of AI in safety-critical scenarios. This work is a result of the TEACHING project, a European research project on the safe, secure, and trustworthy use of AI.
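The gating idea described in the abstract — act on the network's suggestion only when its prediction is robust, and otherwise fall back to a known-safe action — can be sketched as follows. This is a minimal illustration, not the paper's actual method: the `predict` interface, the empirical perturbation-based robustness score, and the action names are all hypothetical stand-ins.

```python
import random

def robustness_margin(predict, x, n_samples=200, eps=0.05, seed=0):
    """Empirical robustness score: fraction of eps-perturbed inputs
    whose predicted action matches the nominal prediction."""
    rnd = random.Random(seed)
    nominal = predict(x)
    agree = 0
    for _ in range(n_samples):
        # Perturb each input feature uniformly within [-eps, +eps].
        perturbed = [xi + rnd.uniform(-eps, eps) for xi in x]
        if predict(perturbed) == nominal:
            agree += 1
    return agree / n_samples

def select_action(predict, x, safe_action="keep_current_style", threshold=0.9):
    """Gate the model's suggestion behind the robustness score;
    fall back to a known-safe default when the score is too low."""
    if robustness_margin(predict, x) >= threshold:
        return predict(x)
    return safe_action
```

A sampled score like this only gives a statistical indication; certified bounds (as in the robustness-certification literature the paper draws on) would replace `robustness_margin` with a provable guarantee over the perturbation ball.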


Metadata
DOI: https://doi.org/10.1007/978-3-031-40953-0_9
