Published in: AI & SOCIETY 1/2021

29.06.2020 | Open Forum

From machine ethics to computational ethics

Written by: Samuel T. Segun



Abstract

Research into the ethics of artificial intelligence is often categorized into two subareas—robot ethics and machine ethics. Many of the definitions and classifications of the subject matter of these subfields, as found in the literature, are conflated, which I seek to rectify. In this essay, I infer that the term ‘machine ethics’ is too broad and glosses over issues that the term ‘computational ethics’ best describes. I show that the subject of inquiry of computational ethics is of great value and indeed is an important frontier in developing ethical artificial intelligence systems (AIS). I also show that computational ethics is a distinct, often neglected field in the ethics of AI. In contrast to much of the literature, I argue that the appellation ‘machine ethics’ does not sufficiently capture the entire project of embedding ethics into AIS, and hence the need for computational ethics. This essay is unique for two reasons: first, it offers a philosophical analysis of the subject of computational ethics that is not found in the literature. Second, it offers a fine-grained analysis that shows the thematic distinction among robot ethics, machine ethics, and computational ethics.


Footnotes
1
As is well known, Asimov’s three laws of robotics are as follows: 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. The fourth law, also referred to as the zeroth law, states that a robot may not harm humanity, or, by inaction, allow humanity to come to harm.
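The priority ordering among the three laws can be made concrete as an ordered constraint check. The sketch below is my own illustration, not from the article; the `Action` type and its boolean fields are hypothetical simplifications that sidestep the interpretive difficulties of terms like "harm".

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Toy description of a candidate action (hypothetical fields)."""
    harms_human: bool        # would the action injure a human?
    disobeys_order: bool     # would it violate a human order?
    endangers_self: bool     # would it risk the robot's own existence?

def permitted(action: Action, inaction_harms_human: bool = False) -> bool:
    """Evaluate an action against the three laws in priority order."""
    # First Law: no harm to humans, by action or inaction.
    if action.harms_human:
        return False
    if inaction_harms_human:
        # Acting to prevent harm overrides the lower-priority laws.
        return True
    # Second Law: obey orders, unless that conflicts with the First Law.
    if action.disobeys_order:
        return False
    # Third Law: self-preservation, subordinate to the first two laws.
    if action.endangers_self:
        return False
    return True
```

Even this trivial encoding shows where the hard work lies: the boolean predicates stand in for exactly the judgments (what counts as harm, as an order, as a conflict) that are difficult to compute.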
 
2
Clarke identifies certain constraints in Asimov’s laws of robotics that would make them computationally difficult to implement. These are the ambiguity and cultural dependence of the terms used in formulating the laws; the role of judgment in decision-making, which would be quite tricky to implement given the degree of programming required; the sheer complexity, which borders on having to account for all possible scenarios; the scope for dilemma and deadlock; robot autonomy; the audit of robot compliance; and the scope of adaptation.
 
3
Smith and Anderson, in the 2014 Pew Research report titled “AI, Robotics, and the Future of Jobs”, discuss the economic and social impact of AI on society. As we continue to build more autonomous intelligent systems, we are likely to delegate responsibilities around security, the environment, healthcare, food production, etc. to these systems. All of this raises concerns about the impact of AI on jobs and society.
 
4
Closely linked to the moral issues with AI is the debate around its legal status, agency, and responsibility. With the imminent disruption of the transport sector by the introduction of self-driving cars, questions around who bears responsibility for harm caused by a self-driving car come to mind. There are also more technical questions around insurance and liability that have to be addressed (Chopra and White 2011).
 
5
In much of the literature, Asimov is unarguably seen as a forerunner in the development of guidelines to regulate the operations of autonomous intelligent systems.
 
6
We might consider, for instance, a desktop printer a machine, but it is markedly different from a self-driving car, which can also be said to be a machine. The difference lies in the degree of autonomy of these systems and the attendant moral burden they carry. The actions of the printer may have a moral impact, for example, when it is used to print documents for whistleblowing activities. A self-driving car, on the other hand, appears to carry a greater ethical burden because it is active in the moral decision-making process. As Lumbreras (2017) mentions, the goal of machine ethics is ultimately to ‘endow’ self-governing systems with ethical comportments. In the case above, a desktop printer would not count as ‘self-governing’ but a self-driving car would.
 
7
Putting Moor’s classification alongside Asaro’s, amoral agents are those I have identified as ethical impact agents. Systems with moral significance are represented as implicit moral agents. Explicit moral agents are systems with dynamic moral intelligence that can make moral decisions while employing moral principles explicitly. The final type of moral agent identified by Moor is the full ethical agent, which shares human-like properties.
 
8
In explicating the importance of these criteria, Floridi and Sanders note: “(a) Interactivity means that the agent and its environment (can) act upon each other… (b) Autonomy means that the agent is able to change state without direct response to interaction: it can perform internal transitions to change its state. So an agent must have at least two states. This property imbues an agent with a certain degree of complexity and decoupled-ness from its environment. (c) Adaptability means that the agent’s interactions (can) change the transition rules by which it changes state. This property ensures that an agent might be viewed, at the given LoA, as learning its own mode of operation in a way, which depends critically on its experience” (Floridi and Sanders 2004, p. 7).
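Floridi and Sanders’s three criteria can be given a concrete, minimal shape in code. The sketch below is my own illustration, not from their paper; the class and method names are invented. It shows an agent with at least two states, environment-driven transitions (interactivity), internal transitions (autonomy), and experience-driven rewriting of its own transition rules (adaptability).

```python
class MinimalAgent:
    """Toy agent exhibiting interactivity, autonomy, and adaptability."""

    def __init__(self):
        self.state = "idle"  # at least two states are required
        # Transition rules: (current state, stimulus) -> next state.
        self.rules = {("idle", "ping"): "active",
                      ("active", "ping"): "idle"}

    def interact(self, stimulus):
        """Interactivity: environment input can change the agent's state."""
        self.state = self.rules.get((self.state, stimulus), self.state)

    def tick(self):
        """Autonomy: an internal transition, with no external stimulus."""
        self.state = "active" if self.state == "idle" else "idle"

    def adapt(self, stimulus, new_state):
        """Adaptability: experience rewrites the transition rules themselves."""
        self.rules[(self.state, stimulus)] = new_state
```

The point of the sketch is that adaptability is a second-order property: `adapt` changes the rule table that `interact` consults, so the agent’s mode of operation depends on its history, as the quoted passage requires.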
 
9
In answering the question of how to embed ethical principles into AIS, it behoves machine ethicists to decide on the best approaches to use. So far, three approaches stand out: top-down, bottom-up, and hybrid. In the top-down approach, an ethical principle is selected and applied in a theoretical form to the AIS using a rule-based method such as Asimov’s three laws of robotics (Allen et al. 2005). The bottom-up approach, on the other hand, does not refer to any particular ethical principle; instead, through machine learning, intelligent systems can learn subsets of ethical principles and over time integrate these into a whole and possibly unique ethical system (Wallach and Allen 2008). Then there is the hybrid approach, which is simply a fusion of the two.
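The contrast among the three approaches can be sketched schematically. This is my own illustration under stated assumptions: all names are hypothetical, the "bottom-up" learner is a trivial nearest-case lookup standing in for a genuine machine-learning model, and the "top-down" rule is a single hard-coded constraint.

```python
# Top-down: a fixed ethical principle applied as an explicit rule.
def top_down_permissible(action: dict) -> bool:
    # A hard-coded deontological constraint (hypothetical example).
    return not action.get("harms_human", False)

# Bottom-up: permissibility is induced from labelled experience
# rather than stated in advance.
class BottomUpAgent:
    def __init__(self):
        self.cases = []  # (features, judged_permissible) pairs

    def observe(self, features, judged_permissible):
        """Record one labelled moral judgement as experience."""
        self.cases.append((list(features), judged_permissible))

    def permissible(self, features) -> bool:
        """Copy the judgement of the most similar previously seen case."""
        if not self.cases:
            return True  # no experience yet
        best = max(self.cases,
                   key=lambda c: len(set(c[0]) & set(features)))
        return best[1]

# Hybrid: the top-down rule acts as a hard filter over the learned judgement.
def hybrid_permissible(agent: BottomUpAgent, action: dict, features) -> bool:
    return top_down_permissible(action) and agent.permissible(features)
```

The hybrid function makes the fusion explicit: the learned component can refine judgements, but never override the explicit principle.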
 
10
Parthemore and Whitby make a strong case for why embodiment constitutes an important aspect of the project to build artificial moral agents. This is because embodiment appeals to the human tendency to relate and nurture, and does so regardless of the form these systems come in, biological or synthetic. Usually, we tend to care for things we anthropomorphise.
 
References
Abney K (2012) Robotics, ethical theory, and metaethics: a guide for the perplexed. In: Lin P, Abney K, Bekey GA (eds) Robot ethics: the ethical and social implications of robotics. MIT Press, Cambridge, pp 35–52
Allen C, Wallach W (2012) Moral machines: contradiction in terms or abdication of human responsibility. In: Lin P, Abney K, Bekey GA (eds) Robot ethics: the ethical and social implications of robotics. MIT Press, Cambridge, pp 55–68
Allen C, Varner G, Zinser J (2000) Prolegomena to any future artificial moral agent. J Exp Theor Artif Intell 12(3):251–261
Allen C, Smit I, Wallach W (2005) Artificial morality: top-down, bottom-up, and hybrid approaches. Ethics Inf Technol 7(3):149–155
Allen C, Wallach W, Smit I (2006) Why machine ethics? IEEE Intell Syst 21(4):12–17
Anderson RE (1992) Social impacts of computing: codes of professional ethics. Soc Sci Comput Rev 10(4):453–469
Anderson M, Anderson SL (2007) Machine ethics: creating an ethical intelligent agent. AI Mag 28(4):15
Anderson M, Anderson S, Armen C (2005) Towards machine ethics: implementing two action-based ethical theories. In: Proceedings of the AAAI 2005 Fall Symposium on Machine Ethics, pp 1–7
Anderson M, Anderson SL, Armen C (2006) An approach to computing ethics. IEEE Intell Syst 21(4):56–63
Arnold T, Scheutz M (2016) Against the moral Turing test: accountable design and the moral reasoning of autonomous systems. Ethics Inf Technol 18(2):103–115
Asaro PM (2006) What should we want from a robot ethic? Int Rev Inf Ethics 6(12):9–16
Asaro P (2012) On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making. Int Rev Red Cross 94(886):687–709
Asimov I (1950) Runaround. In: I, robot. Bantam Dell, New York
Baral C, Gelfond M (1994) Logic programming and knowledge representation. J Logic Program 19:73–148
Boddington P (2017) Towards a code of ethics for artificial intelligence. Springer, Cham
Borenstein J, Pearson Y (2010) Robot caregivers: harbingers of expanded freedom for all? Ethics Inf Technol 12(3):277–288
Bostrom N (2003) Ethical issues in advanced artificial intelligence. Sci Fiction Philos Time Travel Superintell 2003:277–284
Bostrom N (2016) Ethical issues in advanced artificial intelligence. In: Schneider S (ed) Science fiction and philosophy: from time travel to superintelligence. Wiley, Oxford, pp 277–284
Bostrom N, Yudkowsky E (2014) The ethics of artificial intelligence. In: Frankish K, Ramsey WM (eds) The Cambridge handbook of artificial intelligence. Cambridge University Press, Cambridge, pp 316–334
Boyles RJM (2018) A case for machine ethics in modelling human-level intelligent agents. Kritike: Online J Philos 12(1):182–200
Bozdag E (2013) Bias in algorithmic filtering and personalization. Ethics Inf Technol 15(3):209–227
Brundage M (2014) Limitations and risks of machine ethics. J Exp Theor Artif Intell 26(3):355–372
Bryson JJ (2010) Robots should be slaves. In: Wilks Y (ed) Close engagements with artificial companions: key social, psychological, ethical and design issues. John Benjamins Publishing Company, Amsterdam, pp 63–74
Bynum TW (2001) Computer ethics: its birth and its future. Ethics Inf Technol 3(2):109–112
Cardon A (2006) Artificial consciousness, artificial emotions, and autonomous robots. Cogn Process 7(4):245–267
Chella A, Manzotti R (2009) Machine consciousness: a manifesto for robotics. Int J Mach Conscious 1(1):33–51
Chella A, Manzotti R (2013) Artificial consciousness. Imprint Academic, Exeter
Chopra S (2010) Rights for autonomous artificial agents? Commun ACM 53(8):38–40
Chopra S, White LF (2011) A legal theory for autonomous artificial agents. University of Michigan Press, Ann Arbor
Chung CA (ed) (2003) Simulation modelling handbook: a practical approach. CRC Press, London
Clarke R (1993) Asimov’s laws of robotics: implications for information technology. Part 1. Computer 26(12):53–61
Clarke R (1994) Asimov’s laws of robotics: implications for information technology. Part 2. Computer 27(1):57–66
Clowes R, Torrance S, Chrisley R (2007) Machine consciousness. J Conscious Stud 14(7):7–14
Coeckelbergh M (2010a) Moral appearances: emotions, robots, and human morality. Ethics Inf Technol 12(3):235–241
Coeckelbergh M (2010b) Robot rights? Towards a social-relational justification of moral consideration. Ethics Inf Technol 12(3):209–221
Danaher J (2017) The symbolic-consequences argument in the sex robot debate. In: Danaher J, McArthur N (eds) Robot sex: social and ethical implications. MIT Press, Cambridge
Danielson P (2002) Artificial morality: virtuous robots for virtual games. Routledge, London
Danks D, London AJ (2017) Algorithmic bias in autonomous systems. In: Proceedings of the 26th International Joint Conference on Artificial Intelligence. AAAI Press, pp 4691–4697
Faulhaber AK, Dittmer A, Blind F, Wächter MA, Timm S, Sütfeld LR, König P (2019) Human decisions in moral dilemmas are largely described by utilitarianism: virtual car driving study provides guidelines for autonomous driving vehicles. Sci Eng Ethics 25(2):399–418
Floridi L, Sanders JW (2004) On the morality of artificial agents. Mind Mach 14(3):349–379
Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Schafer B (2018) AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Mind Mach 28(4):689–707
Forester T, Morrison P (1991) Computer ethics: cautionary tales and ethical dilemmas in computing. Harvard J Law Technol 4(2):299–305
Gamez D (2008) Progress in machine consciousness. Conscious Cogn 17(3):887–910
Gershman SJ, Horvitz EJ, Tenenbaum JB (2015) Computational rationality: a converging paradigm for intelligence in brains, minds, and machines. Science 349(6245):273–278
Goodall NJ (2014) Machine ethics and automated vehicles. In: Meyer G, Beiker S (eds) Road vehicle automation. Springer, Cham, pp 93–102
Grau C (2006) There is no “I” in “robot”: robots and utilitarianism. IEEE Intell Syst 21(4):52–55
Grodzinsky FS, Miller KW, Wolf MJ (2008) The ethics of designing artificial agents. Ethics Inf Technol 10(2–3):115–121
Hajian S, Bonchi F, Castillo C (2016) Algorithmic bias: from discrimination discovery to fairness-aware data mining. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, pp 2125–2126
Hohfeld WN (1923) Fundamental legal conceptions as applied in judicial reasoning: and other legal essays. Yale University Press, New Haven
Howard D, Muntean I (2016) A minimalist model of the artificial autonomous moral agent (AAMA). In: 2016 AAAI Spring Symposium Series
Johnson DG (2004) Computer ethics. In: Floridi L (ed) The Blackwell guide to the philosophy of computing and information. Wiley, Oxford, pp 65–75
Johnson DG, Miller KW (2008) Un-making artificial moral agents. Ethics Inf Technol 10(2–3):123–133
Leben D (2017) A Rawlsian algorithm for autonomous vehicles. Ethics Inf Technol 19(2):107–115
Leben D (2018) Ethics for robots: how to design a moral algorithm. Routledge, Abingdon
Levesque HJ (1986) Knowledge representation and reasoning. Ann Rev Comput Sci 1(1):255–287
Lewis RL, Howes A, Singh S (2014) Computational rationality: linking mechanism and behavior through bounded utility maximization. Topics Cognit Sci 6(2):279–311
Lin P, Abney K, Bekey GA (2012) The ethical and social implications of robotics. MIT Press, Cambridge
Lokhorst GJC (2011) Computational meta-ethics. Minds Mach 21(2):261–274
Lumbreras S (2017) The limits of machine ethics. Religions 8(5):100. https://doi.org/10.3390/rel8050100
Mabaso BA (2020) Computationally rational agents can be moral agents. Ethics Inf Technol 24:1–9
Malle BF, Scheutz M (2014) Moral competence in social robots. In: Proceedings of the IEEE 2014 International Symposium on Ethics in Engineering, Science, and Technology. IEEE Press, Piscataway, p 8
Marino D, Tamburrini G (2006) Learning robots and human responsibility. Int Rev Inf Ethics 6(12):46–51
McDermott D (2007) Artificial intelligence and consciousness. In: Zelazo PD, Moscovitch M, Thompson E (eds) The Cambridge handbook of consciousness. Cambridge University Press, Cambridge, pp 117–150
Moor JH (1985) What is computer ethics? Metaphilosophy 16(4):266–275
Moor JH (1995) Is ethics computable? Metaphilosophy 26(1/2):1–21
Moor JH (2006) The nature, importance and difficulty of machine ethics. IEEE Intell Syst 21(4):18–21
Moor J (2009) Four kinds of ethical robots. Philosophy Now 72:12–14
Parthemore J, Whitby B (2014) Moral agency, moral responsibility, and artifacts: what existing artifacts fail to achieve (and why), and why they, nevertheless, can (and do!) make moral claims upon us. Int J Mach Conscious 6(2):141–161
Powers TM (2006) Prospects for a Kantian machine. IEEE Intell Syst 21(4):46–51
Ramey CH (2005) ‘For the sake of others’: the ‘personal’ ethics of human-android interaction. Cognitive Science Society, Stresa, pp 137–148
Reggia JA (2013) The rise of machine consciousness: studying consciousness with computational models. Neural Networks 44:112–131
Rodd MG (1995) Safe AI—is this possible? Eng Appl Artif Intell 8(3):243–250
Russell S, Hauert S, Altman R, Veloso M (2015) Ethics of artificial intelligence. Nature 521(7553):415–416
Ruvinsky AI (2007) Computational ethics. In: Quigley M (ed) Encyclopaedia of information ethics and security. IGI Global, Hershey, pp 76–82
Sauer F (2016) Stopping ‘Killer Robots’: why now is the time to ban autonomous weapons systems. Arms Control Today 46(8):8–13
Shachter RD, Kanal LN, Henrion M, Lemmer JF (eds) (2017) Uncertainty in artificial intelligence 5, vol 10. Elsevier, Amsterdam
Smith A, Anderson J (2014) AI, robotics, and the future of jobs. Pew Research Center, p 6
Sparrow R, Sparrow L (2006) In the hands of machines? The future of aged care. Mind Mach 16:141–161
Starzyk JA, Prasad DK (2011) A computational model of machine consciousness. Int J Mach Conscious 3(2):255–281
Sullins JP (2012) Robots, love, and sex: the ethics of building a love machine. IEEE Trans Affect Comput 3(4):398–409
Tavani HT (2002) The uniqueness debate in computer ethics: what exactly is at issue, and why does it matter? Ethics Inf Technol 4(1):37–54
Torrance S (2008) Ethics and consciousness in artificial agents. AI Soc 22(4):495–521
Torrance S (2013) Artificial agents and the expanding ethical circle. AI Soc 28(4):399–414
Vallor S (2011) Carebots and caregivers: sustaining the ethical ideal of care in the twenty-first century. Philos Technol 24(3):251
Van de Voort M, Pieters W, Consoli L (2015) Refining the ethics of computer-made decisions: a classification of moral mediation by ubiquitous machines. Ethics Inf Technol 17(1):41–56
Van den Hoven J (2010) The use of normative theories in computer ethics. In: Floridi L (ed) The Cambridge handbook of information and computer ethics. Cambridge University Press, Cambridge, pp 59–76
Veruggio G, Operto F (2006) Roboethics: a bottom-up interdisciplinary discourse in the field of applied ethics in robotics. Int Rev Inf Ethics 6(12):2–8
Waldrop MM (1987) A question of responsibility. AI Mag 8(1):28
Wallach W, Allen C (2008) Moral machines: teaching robots right from wrong. Oxford University Press, Oxford
Wallach W, Asaro P (2017) Machine ethics and robot ethics. Routledge, New York
Wallach W, Franklin S, Allen C (2010) A conceptual and computational model of moral decision making in human and artificial agents. Topics Cognit Sci 2(3):454–485
Yampolskiy RV (2012) Artificial intelligence safety engineering: why machine ethics is a wrong approach. In: Müller VC (ed) Philosophy and theory of artificial intelligence. Springer, Berlin, pp 389–396
Metadata
Title
From machine ethics to computational ethics
Written by
Samuel T. Segun
Publication date
29.06.2020
Publisher
Springer London
Published in
AI & SOCIETY / Issue 1/2021
Print ISSN: 0951-5666
Electronic ISSN: 1435-5655
DOI
https://doi.org/10.1007/s00146-020-01010-1
