Published in: Ethics and Information Technology 1/2018

27.01.2018 | Original Paper

Embedded ethics: some technical and ethical challenges

Authors: Vincent Bonnemains, Claire Saurel, Catherine Tessier


Abstract

This paper pertains to research aimed at linking ethics and automated reasoning in autonomous machines. It focuses on a formal approach intended to be the basis of an artificial agent’s reasoning that a human observer could regard as ethical reasoning. The approach includes formal tools to describe a situation, together with models of ethical principles designed to automatically compute a judgement on the possible decisions in a given situation and to explain why a given decision is ethically acceptable or not. It is illustrated on three ethical frameworks (utilitarian ethics, deontological ethics and the Doctrine of Double Effect), whose formal models are tested on ethical dilemmas so as to examine how they respond to those dilemmas and to highlight the issues at stake when a formal approach to ethical concepts is considered. The whole approach is instantiated on the drone dilemma, a thought experiment we have designed; this allows the discrepancies between the judgements of the various ethical frameworks to be shown. The final discussion highlights the different sources of subjectivity in the approach, even though concepts are expressed more rigorously than in natural language: indeed, the formal approach enables subjectivity to be identified and located more precisely.
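The pipeline the abstract describes, a situation captured by formal facts and per-framework judgement functions over the possible decisions, can be sketched minimally as follows. The class names, Boolean attributes, and utility numbers are illustrative assumptions for a drone-dilemma-like scenario, not the paper's actual formalism.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Decision:
    """A candidate decision together with its foreseen consequences."""
    name: str
    act_permissible: bool              # is the act itself allowed by the moral rules?
    consequences: tuple                # (fact, utility) pairs; negative utility = harm
    negative_as_means: bool = False    # are the harms used as a means to the good end?

def utilitarian(d: Decision) -> bool:
    """Acceptable iff the aggregated utility of the consequences is non-negative."""
    return sum(u for _, u in d.consequences) >= 0

def deontological(d: Decision) -> bool:
    """Judges the act itself, regardless of its consequences."""
    return d.act_permissible

def double_effect(d: Decision) -> bool:
    """The act must be permissible, the harms must not be the means to the good
    effect, and the good effects must outweigh the bad (proportionality)."""
    return d.act_permissible and not d.negative_as_means and utilitarian(d)

# Illustrative dilemma: striking saves many civilians but harms a bystander,
# and the act itself violates a deontological rule against harming.
strike = Decision("strike", act_permissible=False,
                  consequences=(("civilians saved", 10), ("bystander harmed", -1)))
abstain = Decision("abstain", act_permissible=True,
                   consequences=(("civilians harmed", -10),))
```

With these toy numbers the frameworks disagree, as the abstract announces: utilitarian ethics accepts the strike, while deontological ethics and the Doctrine of Double Effect reject it.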

Footnotes
1
A formal approach consists in defining a minimal set of concepts necessary to deal with ethical reasoning. A language is defined upon this set of concepts so that ethical reasoning can be computed with automatic methods. A formal approach requires disambiguating natural language into pseudo-mathematical definitions in order to provide computable, meaningful results. Such an approach also requires identifying implicit hypotheses.
 
2
In order to deal with an ethical dilemma, an autonomous machine has to be able to identify a situation as such. Despite the fact that some concepts presented in this paper might help automated ethical dilemma recognition, this issue will not be discussed further.
 
3
An implicit ethical machine is a machine which is designed to avoid any situation involving ethical issues (Moor 2006).
 
4
This is a strong assumption we make in order to avoid additional ethical concerns about judging and comparing values of lives.
 
5
Assumptions are required in order to translate ethical notions into computable concepts.
 
6
This rule has several meanings. One of them involves the concept of intention: negative facts are not deliberate. Because our formalism does not involve intention (yet), we make the simplifying assumption that an agent never wishes negative facts to happen.
 
7
Assuming that any life is equal to another.
 
8
In this case, both aggregation criteria give the same result.
 
References
Anderson, M., & Anderson, S. L. (2015). Toward ensuring ethical behavior from autonomous systems: A case-supported principle-based paradigm. Industrial Robot: An International Journal, 42(4), 324–331.
Anderson, M., Anderson, S. L., & Armen, C. (2005). MedEthEx: Towards a medical ethics advisor. In Proceedings of the AAAI Fall Symposium on Caring Machines: AI and Eldercare.
Anderson, S. L., & Anderson, M. (2011). A prima facie duty approach to machine ethics and its application to elder care. In Proceedings of the 12th AAAI Conference on Human-Robot Interaction in Elder Care, AAAI Press, AAAIWS’11-12, pp. 2–7.
Arkin, R. C. (2007). Governing lethal behavior: Embedding ethics in a hybrid deliberative/reactive robot architecture. Technical Report, Proc. HRI 2008.
Aroskar, M. A. (1980). Anatomy of an ethical dilemma: The theory. The American Journal of Nursing, 80(4), 658–660.
Atkinson, K., & Bench-Capon, T. (2016). Value based reasoning and the actions of others. In Proceedings of ECAI, The Hague, The Netherlands.
Baron, J. (1998). Judgment misguided: Intuition and error in public decision making. Oxford: Oxford University Press.
Beauchamp, T. L., & Childress, J. F. (1979). Principles of biomedical ethics. Oxford: Oxford University Press.
Berreby, F., Bourgne, G., & Ganascia, J. G. (2015). Modelling moral reasoning and ethical responsibility with logic programming. In Logic for Programming, Artificial Intelligence, and Reasoning: 20th International Conference (LPAR-20) (pp. 532–548). Fiji: Suja.
Bonnefon, J. F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573–1576.
Bonnemains, V., Saurel, C., & Tessier, C. (2016). How ethical frameworks answer to ethical dilemmas: Towards a formal model. In ECAI 2016 Workshop on Ethics in the Design of Intelligent Agents (EDIA’16), The Hague, The Netherlands.
Bringsjord, S., & Taylor, J. (2011). The divine-command approach to robot ethics. In P. Lin, G. Bekey & K. Abney (Eds.), Robot ethics: The ethical and social implications of robotics, Cambridge: MIT Press, pp. 85–108.
Bringsjord, S., Ghosh, R., Payne-Joyce, J., et al. (2016). Deontic counteridenticals. In ECAI 2016 Workshop on Ethics in the Design of Intelligent Agents (EDIA’16), The Hague, The Netherlands.
Cayrol, C., Royer, V., & Saurel, C. (1993). Management of preferences in assumption-based reasoning. In 4th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, pp. 13–22.
Cointe, N., Bonnet, G., & Boissier, O. (2016). Ethical judgment of agents behaviors in multi-agent systems. In Autonomous Agents and Multiagent Systems International Conference (AAMAS), Singapore.
Conitzer, V., Sinnott-Armstrong, W., Schaich Borg, J., Deng, Y., & Kramer, M. (2017). Moral decision making frameworks for artificial intelligence. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI), San Francisco, CA, USA.
Defense Science Board. (2016). Summer study on autonomy. Technical Report, US Department of Defense.
Foot, P. (1967). The problem of abortion and the doctrine of double effect. Oxford Review, 5, 5–15.
Johnson, J. A. (2014). From open data to information justice. Ethics and Information Technology, 16(4), 263.
Lin, P., Bekey, G., & Abney, K. (2008). Autonomous military robotics: Risk, ethics, and design. Technical report for the U.S. Department of the Navy. Office of Naval Research.
Lin, P., Abney, K., & Bekey, G. (Eds.). (2012). Robot ethics: The ethical and social implications of robotics. Cambridge: The MIT Press.
MacIntyre, A. (2003). A short history of ethics: A history of moral philosophy from the Homeric age to the 20th century. Abingdon: Routledge.
Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, ACM, pp. 117–124.
McIntyre, A. (2014). Doctrine of double effect. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter ed.). Stanford: The Stanford Encyclopedia of Philosophy.
Mermet, B., & Simon, G. (2016). Formal verification of ethical properties in multiagent systems. In ECAI 2016 Workshop on Ethics in the Design of Intelligent Agents (EDIA’16), The Hague, The Netherlands.
Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18–21.
Oswald, M. E., & Grosjean, S. (2004). Confirmation bias. In R. Pohl (Ed.), Cognitive illusions: A handbook on fallacies and biases in thinking, judgement and memory (p. 79). New York: Psychology Press.
Pagallo, U. (2016). Even angels need the rules: AI, roboethics, and the law. In Proceedings of ECAI, The Hague, The Netherlands.
Pinto, J., & Reiter, R. (1993). Temporal reasoning in logic programming: A case for the situation calculus. ICLP, 93, 203–221.
Pnueli, A. (1977). The temporal logic of programs. In 18th Annual Symposium on Foundations of Computer Science (SFCS 1977), pp. 46–57.
Reiter, R. (1978). On closed world data bases. In H. Gallaire & J. Minker (Eds.), Logic and data bases (pp. 119–140). New York: Plenum Press.
Ricoeur, P. (1990). Éthique et morale. Revista Portuguesa de Filosofia, 4(1), 5–17.
Santos-Lang, C. (2002). Ethics for artificial intelligences. In Wisconsin State-Wide Technology Symposium “Promise or Peril?” Reflecting on computer technology: Educational, psychological, and ethical implications. Wisconsin, USA.
Sinnott-Armstrong, W. (2015). Consequentialism. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter ed.). Stanford: Metaphysics Research Lab, Stanford University.
Sullins, J. (2010). RoboWarfare: Can robots be more ethical than humans on the battlefield? Ethics and Information Technology, 12(3), 263–275.
Tessier, C., & Dehais, F. (2012). Authority management and conflict solving in human-machine systems. AerospaceLab: The Onera Journal, 4, 1.
The EthicAA team (2015). Dealing with ethical conflicts in autonomous agents and multi-agent systems. In AAAI 2015 Workshop on AI and Ethics, Austin, Texas, USA.
Tzafestas, S. (2016). Roboethics: A navigating overview. Oxford: Oxford University Press.
von Wright, G. H. (1951). Deontic logic. Mind, 60, 1–15.
Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.
Woodfield, A. (1976). Teleology. Cambridge: Cambridge University Press.
Yilmaz, L., Franco-Watkins, A., & Kroecker, T. S. (2016). Coherence-driven reflective equilibrium model of ethical decision-making. In IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), pp. 42–48. https://doi.org/10.1109/COGSIMA.2016.7497784.
Metadata
Title
Embedded ethics: some technical and ethical challenges
Authors
Vincent Bonnemains
Claire Saurel
Catherine Tessier
Publication date
27.01.2018
Publisher
Springer Netherlands
Published in
Ethics and Information Technology / Issue 1/2018
Print ISSN: 1388-1957
Electronic ISSN: 1572-8439
DOI
https://doi.org/10.1007/s10676-018-9444-x
