Published in: AI & SOCIETY 2/2024

27.06.2022 | Open Forum

Two-stage approach to solve ethical morality problem in self-driving cars

Authors: Akshat Chandak, Shailendra Aote, Aradhita Menghal, Urvi Negi, Shreyas Nemani, Shubham Jha



Abstract

Ethical morality is one of the significant issues in self-driving cars. This paper presents a two-stage approach to ethical decision problems in self-driving cars, intended to serve until a concrete ethical resolution exists for every such problem. In the first stage, the problem is mapped to a known solution, i.e., to a problem for which a fixed set of solutions and action priorities has been defined previously. If no mapping is found, the second stage activates: a Deep Q-learning model estimates the Q values and returns the action that maximizes the reward at that instance. The reward function is designed with decreasing priorities, and users can change or redefine these priorities if needed. The case study and results show that the proposed solution resolves ethical morality problems in self-driving cars to a great extent.
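The two-stage dispatch described in the abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions: the scenario keys, action names, and priority classes below are hypothetical, and `estimate_q` is a hand-coded stand-in for the paper's trained Deep Q-learning model.

```python
# Stage 1: scenarios with a previously defined solution and fixed action priority.
KNOWN_SCENARIOS = {
    "pedestrian_ahead_clear_left_lane": "swerve_left",
    "obstacle_ahead_no_escape": "emergency_brake",
}

# Decreasing-priority reward weights (user-adjustable, as the paper allows).
PRIORITY_WEIGHTS = {"human_safety": 100, "animal_safety": 10, "property": 1}

ACTIONS = ["emergency_brake", "swerve_left", "continue"]

def estimate_q(scenario: str, action: str) -> float:
    """Stand-in for the Deep Q-network: scores an action by how well it
    protects each priority class, weighted by PRIORITY_WEIGHTS.
    The per-action protection scores here are purely illustrative."""
    protection = {
        "emergency_brake": {"human_safety": 0.9, "animal_safety": 0.5, "property": 0.8},
        "swerve_left":     {"human_safety": 0.7, "animal_safety": 0.9, "property": 0.3},
        "continue":        {"human_safety": 0.1, "animal_safety": 0.1, "property": 1.0},
    }[action]
    return sum(PRIORITY_WEIGHTS[k] * v for k, v in protection.items())

def decide(scenario: str) -> str:
    # Stage 1: map the scenario to a known, pre-approved solution.
    if scenario in KNOWN_SCENARIOS:
        return KNOWN_SCENARIOS[scenario]
    # Stage 2: no mapping found, so pick the action maximizing the Q estimate.
    return max(ACTIONS, key=lambda a: estimate_q(scenario, a))

print(decide("pedestrian_ahead_clear_left_lane"))  # stage 1 lookup hit
print(decide("unmapped_dilemma"))                  # stage 2 fallback
```

Note how the fixed-lookup stage keeps pre-vetted ethical decisions deterministic, while the learned stage only handles scenarios the lookup table does not cover, which mirrors the paper's intent of deferring to known priorities whenever possible.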


Metadata
Title
Two-stage approach to solve ethical morality problem in self-driving cars
Authors
Akshat Chandak
Shailendra Aote
Aradhita Menghal
Urvi Negi
Shreyas Nemani
Shubham Jha
Publication date
27.06.2022
Publisher
Springer London
Published in
AI & SOCIETY / Issue 2/2024
Print ISSN: 0951-5666
Electronic ISSN: 1435-5655
DOI
https://doi.org/10.1007/s00146-022-01517-9
