Published in: Ethics and Information Technology 3/2022

01.09.2022 | Original Paper

Vicarious liability: a solution to a problem of AI responsibility?

Authors: Daniela Glavaničová, Matteo Pascucci


Abstract

Who is responsible when an AI machine causes something to go wrong? Or is there a gap in the ascription of responsibility? Answers range from claiming that there is a unique responsibility gap, through positing several distinct responsibility gaps, to denying that there is any gap at all. In a nutshell, the problem is as follows: on the one hand, it seems fitting to hold someone responsible for a wrong caused by an AI machine; on the other hand, there seems to be no fitting bearer of responsibility for this wrong. In this article, we focus on a particular (aspect of the) AI responsibility gap: it seems fitting that someone should bear the legal consequences in scenarios involving AI machines with design defects; however, there seems to be no such fitting bearer. We approach this problem from the legal perspective and suggest vicarious liability of AI manufacturers as a solution. Our proposal comes in two variants: the first has a narrower range of application but can be easily integrated into current legal frameworks; the second requires a revision of current legal frameworks but has a wider range of application. The latter variant employs a broadened account of vicarious liability. We emphasise the strengths of the two variants and finally highlight how vicarious liability offers important insights for addressing a moral AI responsibility gap.

Footnotes
1
We will mainly use the term AI machine in a very broad sense, meaning any machine equipped with a form of artificial intelligence whose behaviour may have normatively relevant consequences. In the following, we will encounter examples involving specific kinds of machines, such as robots and autonomous vehicles. While not everything that is said about one of these kinds automatically carries over to the others, much of the discussion that is relevant to AI responsibility would be lost if we were to exclude works that focus on robot responsibility or on specific kinds of AI machines from our analysis.
 
2
The philosophical debate on fittingness is vast (see, e.g., the survey in Howard, 2018). Nevertheless, for the purposes of the present work it is not necessary to assume a specific view; it is sufficient to have a simple understanding of this notion as an element introducing a normative perspective.
 
3
Since AI machines do not usually have decisional autonomy, one might instead say that there is ultimately human-AI collaboration with respect to decisions: at least at the current stage of AI technology, humans are always causally responsible for how a machine is initially programmed.
 
4
For an opposite view on the moral responsibility of robots, see Sullins (2011).
 
5
To give a few examples, in Bazley v Curry 2 SCR 534 (1999), Lister v Hesley Hall Ltd 1 AC 215 (2002), and Majrowski v Guy’s and St. Thomas’s NHS Trust UKHL 34 (2006), an employer was held vicariously responsible for the intentional wrongdoing of their employees (abuse, harassment, sexual misconduct).
 
6
We do not discuss permissions since, in our view, ascriptions of vicarious liability primarily deal with cases of norm violation.
 
References
Asaro, P. M. (2012). A body to kick, but still no soul to damn: Legal perspectives on robotics. In Robot ethics: The ethical and social implications of robotics (pp. 169–186). MIT Press.
Brodie, D. (2006). The enterprise and the borrowed worker. Industrial Law Journal, 35(1), 87–92.
Brodie, D. (2007). Enterprise liability: Justifying vicarious liability. Oxford Journal of Legal Studies, 27(3), 493–508.
Chesterman, S. (2021). We, the robots? Regulating artificial intelligence and the limits of the law. Cambridge University Press.
Coeckelbergh, M. (2020a). Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and Engineering Ethics, 26(4), 2051–2068.
Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer.
Giliker, P. (2010). Vicarious liability in tort: A comparative perspective. Cambridge University Press.
Gray, A. (2018). Vicarious liability: Critique and reform. Hart Publishing.
Gunkel, D. J. (2020). Mind the gap: Responsible robotics and the problem of responsibility. Ethics and Information Technology, 22, 307–320.
Gurney, J. (2017). Applying a reasonable driver standard to accidents caused by autonomous vehicles. In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot ethics 2.0 (pp. 51–65). Oxford University Press.
Hakli, R., & Mäkelä, P. (2019). Moral responsibility of robots and hybrid agents. The Monist, 102(2), 259–275.
Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for crashes of autonomous vehicles: An ethical analysis. Science and Engineering Ethics, 21(3), 619–630.
Hyman, J. (2015). Action, knowledge, and will. Oxford University Press.
Köhler, S., Roughley, N., & Sauer, H. (2017). Technologically blurred accountability? Technology, responsibility gaps and the robustness of our everyday conceptual scheme. In C. Ulbert, P. Finkenbusch, E. Sondermann, & T. Debiel (Eds.), Moral agency and the politics of responsibility (pp. 51–68). Routledge.
Lin, P., Abney, K., & Jenkins, R. (Eds.). (2017). Robot ethics 2.0: From autonomous cars to artificial intelligence. Oxford University Press.
Loh, W., & Loh, J. (2017). Autonomy and responsibility in hybrid systems. In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot ethics 2.0 (pp. 35–50). Oxford University Press.
Magnet, J. (2015). Vicarious liability and the professional employee. Canadian Cases on the Law of Torts, 6, 208–226.
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6, 175–183.
Santoni de Sio, F., & Mecacci, G. (2021). Four responsibility gaps with artificial intelligence: Why they matter and how to address them. Philosophy & Technology (online first).
Sullins, J. P. (2011). When is a robot a moral agent? In M. Anderson & S. L. Anderson (Eds.), Machine ethics (pp. 151–161). Cambridge University Press.
Tigard, D. W. (2020). There is no techno-responsibility gap. Philosophy & Technology (online first).
Turner, J. (2019). Robot rules: Regulating artificial intelligence. Palgrave Macmillan.
White, T. N., & Baum, S. D. (2017). Liability for present and future robotics technology. In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot ethics 2.0 (pp. 66–79). Oxford University Press.
Wu, S. S. (2016). Product liability issues in the US and associated risk management. In M. Maurer, J. C. Gerdes, B. Lenz, & H. Winner (Eds.), Autonomous driving (pp. 553–569). Springer.
Metadata
Title
Vicarious liability: a solution to a problem of AI responsibility?
Authors
Daniela Glavaničová
Matteo Pascucci
Publication date
01.09.2022
Publisher
Springer Netherlands
Published in
Ethics and Information Technology / Issue 3/2022
Print ISSN: 1388-1957
Elektronische ISSN: 1572-8439
DOI
https://doi.org/10.1007/s10676-022-09657-8
