The Functional Morality of Robots

Linda Johansson
Copyright: © 2010 | Volume: 1 | Issue: 4 | Pages: 9
ISSN: 1947-3451 | EISSN: 1947-346X | EISBN13: 9781613502181 | DOI: 10.4018/jte.2010100105
Cite Article

MLA

Johansson, Linda. "The Functional Morality of Robots." International Journal of Technoethics (IJT), vol. 1, no. 4, 2010, pp. 65-73. http://doi.org/10.4018/jte.2010100105

APA

Johansson, L. (2010). The Functional Morality of Robots. International Journal of Technoethics (IJT), 1(4), 65-73. http://doi.org/10.4018/jte.2010100105

Chicago

Johansson, Linda. "The Functional Morality of Robots." International Journal of Technoethics (IJT) 1, no. 4 (2010): 65-73. http://doi.org/10.4018/jte.2010100105


Abstract

It is often argued that a robot cannot be held morally responsible for its actions. The author suggests that the same criteria used for humans should be used for robots when ascribing moral responsibility. When deciding whether humans are moral agents, one looks at their behaviour and listens to the reasons they give for their judgments in order to determine whether they understood the situation properly. The author suggests that the same should be done for robots. On this view, if a robot passes a moral version of the Turing Test, a Moral Turing Test (MTT), we should hold the robot morally responsible for its actions. This is supported by the impossibility of deciding who actually has (semantic rather than merely syntactic) understanding of a moral situation, and by two examples: the transfer of a human mind into a computer, and aliens who turn out to be robots.
