
Open Access 05.05.2016 | Original Paper

AI assisted ethics

Authors: Amitai Etzioni, Oren Etzioni

Published in: Ethics and Information Technology | Issue 2/2016


Abstract

The growing number of ‘smart’ instruments, those equipped with AI, has raised concerns because these instruments make autonomous decisions; that is, they act beyond the guidelines their programmers provide them. Hence, the question facing the makers and users of smart instruments (e.g., driverless cars) is how to ensure that these instruments will not engage in unethical conduct (not to be conflated with illegal conduct). The article suggests that to proceed we need a new kind of AI program: oversight programs that will monitor, audit, and hold operational AI programs accountable.
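To make the proposal concrete, the following is a minimal sketch of an oversight program, assuming a simple interface in which the operational AI submits each proposed action for review; the class name `OversightProgram`, the forbidden-action check, and the decision format are hypothetical illustrations, not a design specified in the article.

```python
# Minimal sketch of an oversight program: a second software layer that
# monitors, audits, and flags the decisions of an operational AI program.
# OversightProgram, review(), and the decision dict are hypothetical.

from dataclasses import dataclass, field


@dataclass
class OversightProgram:
    forbidden_actions: set[str]                          # conduct ruled out in advance
    audit_log: list[dict] = field(default_factory=list)  # full record for accountability

    def review(self, decision: dict) -> bool:
        """Log every decision; return False when it breaches the declared bounds."""
        violation = decision["action"] in self.forbidden_actions
        self.audit_log.append({**decision, "flagged": violation})
        return not violation


# Usage: the operational AI (here, a driverless car) proposes an action.
overseer = OversightProgram(forbidden_actions={"cross_solid_line"})
allowed = overseer.review({"agent": "car_17", "action": "cross_solid_line"})
print(allowed, len(overseer.audit_log))  # -> False 1
```

The design point is separation of concerns: the operational program optimizes for its task, while an independent program keeps the record and raises the flag.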


Footnotes
1
Stuart Russell of the University of California, Berkeley, stated, “You would want [a robot that does everyday activities] preloaded with a pretty good set of values” (Goldhill 2015).
 
2
Philosophers Philippa Foot and Judith Jarvis Thomson described a situation called “the Trolley Problem”: a runaway trolley is about to run over a group of five people on the tracks, but their deaths could be averted by flipping a switch that would divert the trolley onto another track, where it would kill one person. The question is whether one ought to flip the switch (Lin 2013).
Joshua D. Greene describes a number of ethical dilemmas that generally fit into the category of the “trolley problem.” These include “switch” cases, in which throwing a switch turns the trolley away from some number of people and toward a single person, and “footbridge” cases, in which one must push a person into the path of the trolley to save others’ lives. He also discusses similar famous ethical questions studied by researchers, such as whether it is immoral to allow a child to drown in a shallow pond to avoid muddying one’s clothes, whether one is morally obligated to donate money to save others’ lives, and more (Greene 2014).
Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan apply this question to driverless cars (“autonomous vehicles,” or AVs) as AVs and other uses of artificial intelligence become more widespread. They examined whether individuals would be comfortable with AVs programmed to be utilitarian, that is, to minimize the total number of casualties, and found that the answer was generally yes (Bonnefon et al. 2015).
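The utilitarian rule Bonnefon et al. surveyed reduces to choosing the maneuver with the lowest expected casualty count. A minimal sketch, with hypothetical maneuver names and casualty estimates that would in practice come from the vehicle's perception system:

```python
# A minimal sketch of a utilitarian AV decision rule: among the feasible
# maneuvers, choose the one with the fewest expected casualties.
# Maneuver names and counts below are hypothetical illustrations.

def utilitarian_choice(maneuvers: dict[str, int]) -> str:
    """Return the maneuver that minimizes expected casualties."""
    return min(maneuvers, key=maneuvers.get)

# A "switch"-style case: staying on course kills five, swerving kills one.
print(utilitarian_choice({"stay_on_course": 5, "swerve": 1}))  # -> swerve
```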
 
3
One scholar at the National Science Foundation points out that technology currently outstrips knowledge of how to assign liability for robots’ ethical and legal failures (The Economist 2014).
Professor Patrick Lin points out that algorithms cannot make “an instinctive but nonetheless bad split-second decision” the way humans can, and thus the threshold for liability may be higher (Lin 2013).
 
4
Slobogin and Schumacher (1993: 757) recommend that the Supreme Court draw on public opinion polls to determine societal expectations of privacy. Similar suggestions were made by Fradella, Morrow, Fischer, and Ireland, who conducted a survey of 589 individuals (Fradella et al. 2010–2011: 293–94).
Bonnefon et al. (2015) applied this idea to finding which values ought to guide self-driving cars.
 
5
For example, Nielsen has developed a marketing system for targeting very specific demographics with financial and investment products based on age, affluence, the presence of children in the home, and certain purchasing habits. These include such specific target consumer groups as “Y2-54: City Strivers” and “F4-56: Economizers” (Nielsen 2015). Ted Cruz’s campaign in Iowa relied on psychological profiles to determine the best ways to canvass individual voters in the state (Hamburger 2015).
 
6
Lexicographic preferences are those in which “respondents have a ranking of the attributes [they consider important], but their choice of an alternative is based solely on the level of their most important attribute(s)” (Campbell et al. 2006).
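In code, a lexicographic rule amounts to comparing alternatives attribute by attribute in declared order of importance, so a lower-ranked attribute matters only to break exact ties. A minimal sketch, with hypothetical alternatives, attributes, and scores:

```python
# Lexicographic choice: rank alternatives solely by the most important
# attribute; lower-ranked attributes serve only as tie-breakers.
# The cars, attributes, and scores below are hypothetical.

def lexicographic_best(alternatives: dict[str, dict[str, float]],
                       attributes_by_importance: list[str]) -> str:
    """Return the alternative whose score tuple is lexicographically largest."""
    return max(
        alternatives,
        key=lambda name: tuple(alternatives[name][a] for a in attributes_by_importance),
    )

cars = {
    "car_a": {"safety": 9, "comfort": 3},
    "car_b": {"safety": 9, "comfort": 7},   # wins the safety tie on comfort
    "car_c": {"safety": 8, "comfort": 10},  # best comfort, never considered
}
print(lexicographic_best(cars, ["safety", "comfort"]))  # -> car_b
```

Python's tuple comparison is itself lexicographic, which is why building the score tuple in order of importance implements the definition directly.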
 
7
“Lifeboat ethics” refers to an ethical dilemma outlined by Garrett Hardin in 1974, which describes a situation in which a lifeboat nearly full to capacity must consider whether or not to bring aboard ten additional passengers out of 100 people in the water. The purpose of lifeboat ethics-style philosophical discussions is not to tell anyone what is ethically correct in any given situation, but rather to help individuals to clarify their own values.
This is one component of a larger school of ethics, “moral reasoning.” Moral reasoning encourages “individual or collective practical reasoning about what, morally, one ought to do” (Richardson 2014).
 
8
Across many different situations, it is well established that “attitudes are poor predictors of behavior” (Ajzen and Fishbein 2005). See also Ajzen et al. (2004).
 
9
For an in-depth discussion of the different treatment afforded to less and more sensitive information, see: [redacted].
 
10
See id.
 
References
Ajzen, I., & Fishbein, M. (2005). The influence of attitudes on behavior. In D. Albarracín, B. T. Johnson, & M. P. Zanna (Eds.), The handbook of attitudes. Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
Ajzen, I., Brown, T. C., & Carvajal, F. (2004). Explaining the discrepancy between intentions and actions: The case of hypothetical bias in contingent valuation. Personality and Social Psychology Bulletin, 30(9), 1108–1121.
Dunning, D., Johnson, K., Ehrlinger, J., & Kruger, J. (2003). Why people fail to recognize their own incompetence. Current Directions in Psychological Science, 12(3), 83–87.
Etzioni, A. (1988). The moral dimension. New York: The Free Press.
Etzioni, A., & Etzioni, O. (2016). Keeping AI legal. Vanderbilt Journal of Entertainment & Technology Law (forthcoming).
Fleischer, P. (2015). Privacy and future challenges. Speech, Amsterdam Privacy Conference, Amsterdam, The Netherlands, October 23–26.
Fradella, H. F., et al. (2010–2011). Quantifying Katz: Empirically measuring ‘reasonable expectations of privacy’ in the fourth amendment context. American Journal of Criminal Law, 38, 289–373.
Greene, J. D. (2014). Beyond point-and-shoot morality: Why cognitive (neuro)science matters for ethics. Ethics, 124(4), 695–726.
Hardin, G. (1974). Lifeboat ethics: The case against helping the poor. Psychology Today.
Homer. (1978). Odyssey (J. H. Finley, Jr., Trans.). Cambridge, MA: Harvard University Press.
Jacobellis v. Ohio. (1964). 378 U.S. 184.
Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
Lohr, S. (2015). Data-ism: The revolution transforming decision making, consumer behavior, and almost everything else. London: OneWorld Publications.
Marcus, G. (2012). Moral machines. The New Yorker.
Mayer-Schönberger, V., & Cukier, K. (2014). Big data. New York: Houghton Mifflin Harcourt.
Slobogin, C., & Schumacher, J. E. (1993). Reasonable expectations of privacy and autonomy in fourth amendment cases: An empirical look at understandings recognized and permitted by society. Duke Law Journal, 42, 727–775.
The Economist. (2014). That thou art mindful of him. March 29.
Walzer, M. (1984). Spheres of justice: A defense of pluralism and equality. New York: Basic Books.
Wrong, D. (1995). The problem of order: What unites and divides society. Cambridge, MA: Harvard University Press.
Metadata
Title
AI assisted ethics
Authors
Amitai Etzioni
Oren Etzioni
Publication date
05.05.2016
Publisher
Springer Netherlands
Published in
Ethics and Information Technology / Issue 2/2016
Print ISSN: 1388-1957
Electronic ISSN: 1572-8439
DOI
https://doi.org/10.1007/s10676-016-9400-6
