Published in: Ethics and Information Technology 4/2022

01.12.2022 | Original Paper

No wheel but a dial: why and how passengers in self-driving cars should decide how their car drives

Author: Johannes Himmelreich


Abstract

Much of the debate on the ethics of self-driving cars has revolved around trolley scenarios. This paper instead takes up the political or institutional question of who should decide how a self-driving car drives. Specifically, this paper addresses the question of whether and why passengers should be able to control how their car drives. The paper reviews existing arguments—those for passenger ethics settings and for mandatory ethics settings respectively—and argues that they fail. Although the arguments are not successful, they serve as the basis to formulate desiderata that any approach to regulating the driving behavior of self-driving cars ought to fulfill. The paper then proposes one way of designing passenger ethics settings that meets these desiderata.


Footnotes
1
By “self-driving cars,” “autonomous vehicles” or “automated vehicles” (AV) I understand individually-owned passenger vehicles with automation level 4 or higher according to the SAE definition. I concentrate on cars owned by individuals, in contrast to corporate-owned cars.
 
2
For arguments in favor of the relevance of trolley scenarios, however, see Lin (2017), Keeling (2020), and Awad et al. (2020).
 
3
The nomenclature is from Gogoll and Müller (2017). The distinction between PES and MES depends on whether a passenger can meaningfully control a vehicle’s driving style and macro path planning. The expression “meaningful control” is central to the ethics of robotics.
 
4
In addition to arguments that address PES directly, I also review related arguments that can be applied to the issue of PES (Bonnefon et al., 2016; Millar, 2014a; 2015).
 
5
My discussion here is prompted by comments by a peer reviewer for a different journal.
 
6
Tesla’s cost function for path planning minimizes traversal time, collision risk, lateral acceleration, and lateral jerk—the latter as a measure of comfort (Tesla, 2021). The behavior of Teslas is hence governed via deliberately designed properties of the cost function.
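The kind of weighted-sum trajectory cost the footnote describes can be sketched in a few lines. This is an illustrative sketch only: the function signature, term names, and weights below are hypothetical choices for exposition, not Tesla's actual implementation.

```python
# Illustrative weighted-sum cost over candidate trajectories. Lower cost wins;
# the weights are where design values (speed vs. safety vs. comfort) enter.
def trajectory_cost(time_s, collision_risk, lat_accel, lat_jerk,
                    w_time=1.0, w_risk=100.0, w_accel=0.5, w_jerk=0.2):
    """Return the cost of one candidate trajectory (hypothetical weights)."""
    return (w_time * time_s
            + w_risk * collision_risk
            + w_accel * lat_accel
            + w_jerk * lat_jerk)

# A faster but riskier candidate vs. a slower, smoother one:
fast = trajectory_cost(time_s=10.0, collision_risk=0.02, lat_accel=3.0, lat_jerk=1.5)
slow = trajectory_cost(time_s=12.0, collision_risk=0.001, lat_accel=1.0, lat_jerk=0.3)
assert slow < fast  # with these weights, the planner prefers the smoother trajectory
```

The point of the footnote survives the simplification: once driving behavior is governed by such a cost function, changing its weights is a deliberate design decision, not a neutral technical one.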
 
7
Technical and normative issues are not independent: Technological choices constrain the ethics of a system. This is an important insight in the value-alignment literature (cf. Gabriel, 2020), of which the debate on the ethics of self-driving cars can be seen as a part.
 
8
Things are actually more complicated because it is not clear whose proxy the cars ought to be—there is thus a “moral proxy problem” (Thoma, 2022). Depending on whether cars are proxies for individuals or aggregates (such as developers or regulators), they should make risky decisions very differently (ibid.).
 
9
What these limits should be, and what considerations should guide their delineation, is often unclear. But see Contissa et al. (2017, p. 374) and Etzioni and Etzioni (2017).
 
10
Of course, there could be a collective decision in favor of PES; but this is not how PES are usually defended.
 
11
I take the name for this argument from the title of a paper by Bonnefon et al. (2016), who present the empirical finding that motivates the argument I present here (the main idea in the argument is also called the “ethical opt-out problem” (Bonnefon et al., 2020)). However, to avoid misattribution, the argument I present here is not theirs. The argument is hinted at by Contissa et al. (2017, p. 367), who write that “[i]f an impartial (utilitarian) ethical setting is made compulsory for, and rigidly implemented into, all AVs, many people may refuse to use AVs, even though AVs may have significant advantages, in particular with regard to safety, over human-driven vehicles.” Bonnefon et al. (2020, p. 110) advance a similar argument: “[I]f people are not satisfied with the ethical principles that guide moral algorithms, they will simply opt out of using these algorithms, thus nullifying all their expected benefits.”
 
12
Similarly, Ryan (2020) writes: “Very few people would buy [a self-driving car] if they prioritised the lives of others over the vehicle’s driver and passengers.”
 
13
The social dilemma argument is motivated by an empirical finding: Although a majority of people agree that a driving style that maximizes overall welfare or health in a population is the preferable driving style from a moral point of view, many people would not actually want to use or buy a vehicle that drives in this way (Bonnefon et al., 2016; Gill 2021). This is the social dilemma.
 
14
What I describe is only an extreme version of an egoistic car. In fact, as has been argued, there could be a continuum (Contissa et al., 2017).
 
15
A prisoner’s dilemma (PD) is a two-person symmetric game with two pure strategies, “cooperate” and “defect,” in which the payoffs of the four possible outcomes satisfy the condition T > R > P > S: the temptation payoff T for defecting against a cooperator exceeds the reward R for mutual cooperation, which exceeds the punishment P for mutual defection, which in turn exceeds the so-called sucker payoff S for cooperating with a defector.
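The payoff ordering and its consequence can be checked in a few lines. The numbers below are conventional textbook payoffs chosen for illustration, not values from the paper.

```python
# Conventional illustrative PD payoffs: temptation, reward, punishment, sucker.
T, R, P, S = 5, 3, 1, 0
assert T > R > P > S  # the defining PD ordering

# Payoff to the row player, keyed by (my_move, their_move).
payoff = {("D", "C"): T, ("C", "C"): R, ("D", "D"): P, ("C", "D"): S}

# Whatever the other player does, defecting pays strictly more, so mutual
# defection is the unique equilibrium of the one-shot game.
assert payoff[("D", "C")] > payoff[("C", "C")]  # against a cooperator
assert payoff[("D", "D")] > payoff[("C", "D")]  # against a defector
```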
 
16
This is acknowledged by some (Bonnefon et al., 2016).
 
17
Respondents in China would find it “tolerable” if self-driving cars were four to five times as safe as human drivers, and “acceptable” if the cars were safer by one to two orders of magnitude (Liu et al., 2019).
 
18
For context: These are data from US participants. US participants can be expected to have relatively unfavorable attitudes towards AVs compared to India or China. A study in 2014 found that only 14% and 22% of respondents in the UK and US respectively held very positive attitudes towards automated vehicles, compared to 46% and 50% in India and China (Schoettle and Sivak, 2014).
 
19
The Kelley Blue Book calls these “value shoppers” (KBB Editors, 2022).
 
20
This is not a crucial assumption: Even if the nominal insurance costs might be higher, especially in the short term, they could be decreased by policy to make self-driving cars attractive (Ravid, 2014).
 
21
Moreover, it would likely take decades to accumulate sufficient exposure to measure (as opposed to simulate or estimate) the safety of self-driving cars (Kalra & Paddock, 2016).
 
22
I concentrate on this argument because it is recent and the best developed.
 
23
By “best interest of society” the authors mean that traffic injuries and fatalities are minimized in a given population.
 
24
This differs from the social dilemma argument, which assumed that purchasing decisions, rather than traffic itself, constitute a PD.
 
25
I write “emerge” and “stable” to indicate that the game is played repeatedly. Even if players will not cooperate in one-shot games, the prospects for achieving widespread cooperation look much better when the PD is played repeatedly.
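Why repetition helps can be shown with a small sketch: against a tit-for-tat partner, steady cooperation outscores steady defection once the game lasts long enough. The payoffs are again the conventional illustrative T, R, P, S = 5, 3, 1, 0, not values from the paper.

```python
# Conventional illustrative PD payoffs.
T, R, P, S = 5, 3, 1, 0

def score_vs_tit_for_tat(my_moves):
    """Total payoff for playing `my_moves` against a tit-for-tat partner."""
    total, their_move = 0, "C"  # tit-for-tat opens by cooperating
    for mine in my_moves:
        if mine == "C":
            total += R if their_move == "C" else S
        else:
            total += T if their_move == "C" else P
        their_move = mine  # tit-for-tat copies our previous move
    return total

n = 10
always_cooperate = score_vs_tit_for_tat(["C"] * n)  # R every round
always_defect = score_vs_tit_for_tat(["D"] * n)     # T once, then P forever
assert always_cooperate > always_defect
```

Defection gains the temptation payoff only once; from then on the tit-for-tat partner retaliates, which is the standard mechanism behind cooperation in repeated play (Axelrod, 2009).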
 
26
It could be said that the traffic game is embedded in other games within the political structure.
 
27
Of course, MES could also incorporate a concern for pluralism. But, arguably, PES are more responsive to occupants’ preferences: under PES, the average distance between behavior and preference will likely be narrower than under MES.
 
28
Another illustration of this conflict between others’ interests and your own is, of course, found in trolley cases and collision scenarios such as the Tunnel Problem, in which a car must choose between running over a pedestrian and running into the wall of a tunnel (Millar, 2014a).
 
29
By “mobility” I understand the time required to get to a destination. By “safety” I understand the absence of risk, defined as a function of the probability of a hazardous event and the harm to the occupants and others. It should be noted that I understand both “mobility” and “safety” impartially as everyone’s mobility and safety and not just those of vehicle occupants.
 
30
Assume also that this situation occurs in a location that does not prescribe a minimum lateral distance for safe passing.
 
31
Of course, the details of this would have to be worked out by operationalizing these value conflicts and by studying the user interaction design (cf. Thornton et al., 2019).
 
32
This is a matter of how the one dial trades off between the two underlying settings: the one for the mobility–safety conflict and the other for the self-interest–other-interest conflict. How the one dial makes this tradeoff (the path of the indifference curve through the space of parameter combinations) is an important question for ethics and design.
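The idea of one dial steering two underlying parameters can be sketched as a mapping from a single setting to a point in parameter space. Everything below is hypothetical: the parameter names and the linear mapping are illustrative choices, and choosing the mapping's actual shape is precisely the design question at issue.

```python
# Hypothetical one-dial design: a single setting t in [0, 1] jointly fixes
# a safety weight (safety vs. mobility) and an other-regard weight
# (others' interests vs. self-interest). The linear path below is one of
# many possible curves through the two-parameter space.
def dial_to_parameters(t: float) -> tuple[float, float]:
    if not 0.0 <= t <= 1.0:
        raise ValueError("dial setting must lie in [0, 1]")
    safety_weight = 1.0 - 0.5 * t    # t = 0: maximally cautious
    other_regard = 1.0 - 0.25 * t    # t = 1: more self-interested, faster
    return safety_weight, other_regard

# The two ends of the dial's path through parameter space:
assert dial_to_parameters(0.0) == (1.0, 1.0)
assert dial_to_parameters(1.0) == (0.5, 0.75)
```

A regulator could then constrain the design by bounding the reachable region (e.g., a floor on `other_regard`) while leaving the dial itself to the passenger.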
 
33
Another problem with this objection is that it considers frequency but not stakes. It might be true that there are more opportunities for mobility than for safety. But the stakes for safety might be much higher: safety is about avoiding injuries and physical harm, whereas mobility is only about getting to a destination faster.
 
34
Shariff et al. (2017) discuss the importance of “virtue signalling,” however not in the context of PES but as a psychological mechanism to exploit (in advertising and communication) to increase AV adoption.
 
References
Alexander, J. M. (2007). The structural evolution of morality. Cambridge University Press.
Arpaly, N. (2004). Unprincipled virtue: An inquiry into moral agency. Oxford University Press.
Axelrod, R. (2009). The evolution of cooperation (Revised ed.). Basic Books.
Basl, J., & Behrends, J. (2020). Why everyone has it wrong about the ethics of autonomous vehicles. In Frontiers of engineering: Reports on leading-edge engineering from the 2019 symposium. National Academies Press. https://doi.org/10.17226/25620
Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2020). The moral psychology of AI and the ethical opt-out problem. In S. Matthew Liao (Ed.), Ethics of artificial intelligence (pp. 109–126). Oxford University Press.
Gabriel, I. (2022). Towards a theory of justice for artificial intelligence. Dædalus, 151(2), 218–231.
Gerdes, J. C., Thornton, S. M., & Millar, J. (2019). Designing automated vehicles around human values. In G. Meyer & S. Beiker (Eds.), Road vehicle automation 6 (pp. 39–48). Lecture Notes in Mobility. Springer.
Harper, C. D., Hendrickson, C. T., Mangones, S., & Samaras, C. (2016). Estimating potential increases in travel with autonomous vehicles for the non-driving, elderly and people with travel-restrictive medical conditions. Transportation Research Part C: Emerging Technologies, 72, 1–9. https://doi.org/10.1016/j.trc.2016.09.003
Kalra, N., & Paddock, S. M. (2016). Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability? RAND Corporation. https://doi.org/10.7249/RR1478
Keeling, G., Evans, K., Thornton, S. M., Mecacci, G., & de Sio, F. S. (2019). Four perspectives on what matters for the ethics of automated vehicles. In G. Meyer & S. Beiker (Eds.), Road vehicle automation 6 (pp. 49–60). Lecture Notes in Mobility. Springer.
Millar, J. (2017). Ethics settings for autonomous vehicles. In P. Lin, R. Jenkins, & K. Abney (Eds.), Robot ethics 2.0: From autonomous cars to artificial intelligence (pp. 20–34). Oxford University Press.
NHTSA. (1995). Synthesis report: Examination of target vehicular crashes and potential ITS countermeasures (DOT HS 808 263). United States Department of Transportation.
NHTSA. (2017). Automated driving systems 2.0: A vision for safety. United States Department of Transportation.
Ravid, O. (2014). Don’t sue me, I was just lawfully texting & drunk when my autonomous car crashed into you. Southwestern Law Review, 44(1), 175–208.
Soltanzadeh, S., Galliott, J., & Jevglevskaja, N. (2020). Customizable ethics settings for building resilience and narrowing the responsibility gap: Case studies in the socio-ethical engineering of autonomous systems. Science and Engineering Ethics, 26, 2693–2708. https://doi.org/10.1007/s11948-020-00221-5
Susskind, J. (2018). Future politics: Living together in a world transformed by tech. Oxford University Press.
Thoma, J. (2022). Risk imposition by artificial agents: The moral proxy problem. In S. Vöneky, P. Kellmeyer, O. Müller, & W. Burgard (Eds.), The Cambridge handbook of responsible artificial intelligence: Interdisciplinary perspectives. Cambridge University Press.
Metadata
Title
No wheel but a dial: why and how passengers in self-driving cars should decide how their car drives
Author
Johannes Himmelreich
Publication date
01.12.2022
Publisher
Springer Netherlands
Published in
Ethics and Information Technology / Issue 4/2022
Print ISSN: 1388-1957
Electronic ISSN: 1572-8439
DOI
https://doi.org/10.1007/s10676-022-09668-5
