
27-06-2022 | Open Forum

Two-stage approach to solve ethical morality problem in self-driving cars

Authors: Akshat Chandak, Shailendra Aote, Aradhita Menghal, Urvi Negi, Shreyas Nemani, Shubham Jha

Published in: AI & SOCIETY | Issue 2/2024

Abstract

Ethical morality is one of the significant issues in self-driving cars. This paper provides a new approach to solving ethical decision problems in self-driving cars for as long as no concrete ethical answer exists for every such problem. It proposes a two-stage approach: in the first stage, the problem is mapped to a solution that is already known, i.e., a problem for which a fixed set of solutions and action priorities has been defined beforehand. If no solution is found and the mapping is unsuccessful, the second stage activates, in which a Deep Q-learning model computes the solution: it estimates the Q values and returns the action that maximizes the reward at that instant. The reward function is designed with decreasing priorities and acts accordingly, and users can change or redefine these priorities if needed. The case study and results show that the solution presented in the paper solves ethical morality problems in self-driving cars to a great extent.
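The two-stage decision flow described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the scenario keys, the action set, the state encoding, and the network layout are assumptions made for the example; only the control flow (first look up a predefined priority action, otherwise fall back to a Deep Q-network and take the action with the highest estimated Q value) follows the abstract.

```python
# Sketch of the two-stage decision flow (assumed names and structure, not the paper's code).
import torch
import torch.nn as nn

ACTIONS = ["brake", "swerve_left", "swerve_right", "continue"]  # assumed action set

# Stage 1: scenarios with previously defined solutions and action priorities (assumed keys).
KNOWN_SCENARIOS = {
    "pedestrian_ahead_clear_left": "swerve_left",
    "obstacle_ahead_no_escape": "brake",
}

class QNetwork(nn.Module):
    """Small fully connected Q-network: state vector in, one Q value per action out."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def decide(scenario_key: str, state: torch.Tensor, qnet: QNetwork) -> str:
    # Stage 1: direct mapping to a predefined solution, if one exists.
    if scenario_key in KNOWN_SCENARIOS:
        return KNOWN_SCENARIOS[scenario_key]
    # Stage 2: no mapping found, so estimate Q values and return the action
    # that maximizes the (priority-shaped) expected reward at this instant.
    with torch.no_grad():
        q_values = qnet(state)
    return ACTIONS[int(torch.argmax(q_values))]

if __name__ == "__main__":
    qnet = QNetwork(state_dim=8, n_actions=len(ACTIONS))  # untrained, for illustration only
    state = torch.zeros(8)                                # placeholder sensor encoding
    print(decide("unmapped_scenario", state, qnet))
```

The user-adjustable, decreasing-priority reward function mentioned in the abstract would enter at training time as reward shaping for the Q-network; it is not represented in this sketch.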


Metadata
Title
Two-stage approach to solve ethical morality problem in self-driving cars
Authors
Akshat Chandak
Shailendra Aote
Aradhita Menghal
Urvi Negi
Shreyas Nemani
Shubham Jha
Publication date
27-06-2022
Publisher
Springer London
Published in
AI & SOCIETY / Issue 2/2024
Print ISSN: 0951-5666
Electronic ISSN: 1435-5655
DOI
https://doi.org/10.1007/s00146-022-01517-9
