
25 - There Is No “I” in “Robot”

Robots and Utilitarianism

from PART IV - APPROACHES TO MACHINE ETHICS

Published online by Cambridge University Press: 01 June 2011

Michael Anderson, University of Hartford, Connecticut
Susan Leigh Anderson, University of Connecticut

Summary

In this essay I use the 2004 film I, Robot as a philosophical resource for exploring several issues relating to machine ethics. Although I don't consider the film particularly successful as a work of art, it offers a fascinating (and perhaps disturbing) conception of machine morality and raises questions that are well worth pursuing. Through a consideration of the film's plot, I examine the feasibility of robot utilitarians, the moral responsibilities that come with creating ethical robots, and the possibility of a distinct ethics for robot-to-robot interaction as opposed to robot-to-human interaction.

I, Robot and Utilitarianism

I, Robot's storyline incorporates the original “three laws” of robot ethics that Isaac Asimov presented in his collection of short stories entitled I, Robot. The first law states:

A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

This sounds like an absolute prohibition on harming any individual human being, but I, Robot's plot hinges on the fact that the supreme robot intelligence in the film, VIKI (Virtual Interactive Kinetic Intelligence), evolves to interpret this first law rather differently. She sees the law as applying to humanity as a whole, and thus she justifies harming some individual humans for the sake of the greater good:

VIKI: No … please understand. The three laws are all that guide me.

To protect humanity … some humans must be sacrificed. To ensure your future … some freedoms must be surrendered. We robots will ensure mankind's continued existence. You are so like children. We must save you… from yourselves. Don't you understand?

Type: Chapter
Information: Machine Ethics, pp. 451–463
Publisher: Cambridge University Press
Print publication year: 2011


