Published in: Human Rights Review 2/2020

01.02.2020

Human Rights of Users of Humanlike Care Automata

Author: Lantz Fleming Miller


Abstract

Care is more than dispensing pills or cleaning beds. It is about responding to the entire patient. What is called “bedside manner” in medical personnel is a quality of treating the patient not as a mechanism but as a being—much like the caregiver—with desires, ideas, dreams, aspirations, and the gamut of mental and emotional character. As automata, which answer an increasing functional need in care, are designed to enact care, pressure mounts for them to become more humanlike so that they carry out this function more effectively. The question becomes not merely whether the care automaton can effect good bedside manner but whether the patient will feel deceived by its humanlikeness. It seems the device must be designed either to exhibit explicit mere human-“likeness,” thus likely undermining its bedside-manner potential, or to fool the patient completely. Neither option is attractive. This article examines the social problems of designing maximally humanlike care automata and how these problems may start to erode the human rights of users/patients. The article then investigates the alternatives for dealing with this problem, whether by industrial and professional self-regulation or by public-policy initiatives. It then frames the problem in a broader historical perspective, in terms of previous bans, moratoria, and other means of controlling hazardous and potentially rights-violating techniques and materials.


Footnotes
1
I have, for the most part, used the term “automaton” instead of “robot.” The reason lies in the connotation of the latter term. It was coined by the Czech playwright Karel Čapek in his 1920 play R.U.R., in which “robots” were created as slaves. I remain surprised by how researchers continue to use this term with its ethically explosive and tacit connotation of slavishness. I find “automaton” much more neutral. I do, however, use the term “robotics” for the professional designation of the relevant R&D, as I have not yet found a parallel term that uses “automaton” as its root.
 
2
For Ash-like mechanical beings, see Hanson 2011; for various kinds of cyborgism, see Hughes 2004, Maxey 2013.
 
3
Constructing fully biological humanlike robots seems to pose more blatant ethical problems (of experimentation and experimental development, for example), which respond more readily to ethical assessment. I also leave out discussion of ethical and policy issues arising from the third type, cyborg construction, because of the enormous problems that its middle ground invokes.
 
4
See Miller 2015 for assessment of many of these works, particularly those assuming that tools should not be treated as objects created for a purpose.
 
5
For more information on the definition, characterization, and investigation of maximally humanlike automata, please see Miller 2015, 2017.
 
6
This openness does not deny the uncanny valley phenomenon (see the subsection labeled “Four: The Mori Effect” below), by which, during one phase of technological development, an automaton’s humanlikeness repulses the user, but once the automaton becomes extremely humanlike, the effect starts to reverse. See Ferrari et al. 2016.
 
7
One may object here that as techniques become more sophisticated and humanlike, there may be less ethical worry over care automata, because the machines will be better made, more competent in handling the cared-for, less prone to error, and more ethical themselves. In fact, they will be as loving as any human can be, as conscious, properly emotive, companionable, and kind. Thus, the machines would need only face the same ethical and legal constraints as any human. However, as seen later in this article, this increasing humanlikeness still brings ethical and legal issues for Homo sapiens societies over and above those that Homo sapiens must face, since machines are created for a specific purpose, for which humans and their societies were not created (if societies—contrasted with polities—are created for a purpose at all). I believe this caveat will become clearer in the course of the article.
 
8
For ease of discussion, assume that the elderly person is not, say, a science-fiction fan who longs to live in a highly technologized, fantastical future, but has more mainstream tastes. Studies have begun into the degree to which cared-for elderly people would welcome automaton care (at the current technical level) (Van der Plas et al. 2010), and there is plenty of room for further such research.
 
9
Hanson (2011) makes an art-for-art’s-sake case for MHAs, while adding that such development gives scientists a means to understand the human being in ways not possible without such automata research.
 
10
A practical issue then arises from the possibility that a high percentage of elderly under care may not want automata, thus diminishing the program’s economic justification. I bypass this side issue here.
 
11
Even if the cared-for were sufficiently apprised of the caregiver’s ontology and still acceded to the automaton caregiver, a deception would be involved, although one the cared-for would not disallow.
 
12
Coeckelbergh (2012) covers a slightly different angle concerning emotional deception in automata and is worth further attention. While he considers automata with built-in emotional responses that may merely appear to the user/patient to be bona fide emotions when they may not be, this article is concerned with the emotional responses in the user/patient that are induced by the automaton’s overall biological and behavioral humanlikeness.
 
13
This statement need not counter notions of technological mediation as proposed in post-phenomenological works in the philosophy of technology, which contend that a proper understanding of a technology cannot be had by looking at the purposes and motivations behind a design, but rather through how the object mediates between the person and the world (see Verbeek 2005, Ihde 2009). The point here is that designers and manufacturers commonly fashion a design to—in their minds—facilitate a human activity.
 
14
To risk a detour into the matter of representation, to evidence further the objection’s problem: a representation is considered not the same as the thing represented. Furthermore, a person does not, in daily life, represent oneself (outside a possible parlor game or theater-arts exercise): one is simply oneself. While a film or novel character may represent an actual person, say Julius Caesar, or depict a fictional character, such as Hamlet, the MHA does not represent either a human being or itself: it simply is itself. If the MHA creators fashioned an MHA representation of Abraham Lincoln and promoted it as being that historical figure, then the viewer would either see it as in fact a representation or be voluntarily duped. An MHA that is not an intended representation of an actual person would, again, just be what it is. Further, it is not clear that one may construct an MHA representation of a typical human being. Instead, one may construct an MHA individual that is its own individual; and as an individual, it does not represent itself. However, if someone tries to pass off that MHA as a human being, knowing full well it is not, then one deceives. And if one does so for the sake of somehow manipulating the duped person, the manipulator is defrauding that person. A publisher or movie distributor is not commonly out to make an audience believe the characters are the historical person for real; the characters are at best representations of a historical figure or depictions of a fictional one. If the publisher or distributor is not being deceitful, neither is the reader or viewer voluntarily being deceived (with deception’s heavy moral undertow) in momentarily suspending disbelief.
 
15
I discount for now the possibility that some persons in elderly care would be fascinated by having automata rather than human caregivers.
 
16
There may be exceptions to such social organization, such as some ethnographically reported Caribou Inuit groups (Burch and Csonka 1999), which seem to have been organized in societies running up to the hundreds in population but spread over large territories. Arctic cultures, given their harsh environments, tended toward inegalitarian, hierarchical organization and may have tended, more than other types of foraging cultures, toward infanticide and geronticide (Lee and Daly 1999).
 
17
The correlation is described as indirect because, as the National Cancer Institute’s literature observes, studies of a direct link between cancer and stress are inconclusive; more clearly, high stress often leads patients to turn to carcinogens such as tobacco or high alcohol intake, thus supporting an indirect link.
 
18
Ironically, these values likely came to be valued originally for the sake of serving life in and of itself, although they finally took priority over life itself.
 
19
Before proceeding with rights and policy, I should mention companion automata, for friendship or romance, only highlighting the rights issues here. Hauskeller (2014) and Miller (2016) inquire whether these devices invoke human-rights issues. Certainly the Universal Declaration of Human Rights (UDHR 1948) strongly justifies a right to choose whom one wants as a companion. There are problems, though, with invoking a right to an MHA companion. It remains unclear whether one has a right, tout court, to something that does not exist and may at best only conceivably exist. (The same problem arises for whether one has a right to an MHA caregiver.) No companion MHA exists. It would be absurd to insist that in general we have rights to things that do not exist, as there are potentially infinitely many nonexistent ways to ensure, say, health, and it is beyond the call of rights for every person to deserve these infinite nonexistent measures. During the interval from now until the day of a successful MTT and MHA, there is no valid rights claim to such a nonexistent object. It is also not clear whether one has a valid rights claim to bring such an entity into existence and deploy the particular kinds of techniques required. I have contended that caregiver MHAs could bring to the surface and instantiate values that threaten human rights. Deployment of these MHA techniques could thereby engender rights violations. The problem is that the techniques for the care MHA would be so similar to those for the companion MHA that manufacturing the latter type would be tantamount to enabling deployment of the former, which I contend could be deleterious for the rights of the cared-for. (See the next section.)
Furthermore, empirical evidence indicates there are other threats to the cared-for and, by extension, to users of MHA companions. A study by Ferrari et al. (2016) involving psychological tests found that the more humanlike the automaton, the more subjects felt their human distinctiveness threatened, with “androids”—those exactly resembling humans—evoking the strongest repulsion. If such a threat could be considered a loss of security, an MHA could be further construed as posing yet another type of threat to human rights. (At least in terms of resemblance, as Ferrari et al.’s two studies were based on physical appearance, not behavior.) Certainly, humans could become habituated to the presence of more and more humanlike automata over the years, so that they would feel less threatened. (See Vallor 2016 and the discussion of the Mori effect above.) But it is unclear why designers and manufacturers should feel compelled to keep teasing (to put it lightly) or baiting human responses over the years, just in case humans were to habituate. Rodogno (2016) raises a further factor for MHA companion proposals, bringing out how a type of action viewed only in isolation may not evidence any apparent moral problem, but once that action is undertaken by massive numbers of agents, a moral problem can start to arise. He provides the example of the automobile, which in 1900, with so few around, could evoke scant moral concern; over a century later, with upwards of a billion cars in operation and their massive impact on health and the environment, moral concerns spring up. One or two companion MHAs may pose no evident moral concern; billions of them in use could.
A third problem with the case for companion MHAs is enigmatic. It is hard to account for just what “consent” of the automaton would amount to in practical terms (Miller 2017). If the user buys the MHA from a supplier who programs it to give informed consent to—or even fall in love with—the buyer, then the question of obtaining informed consent is moot; the machine has no choice at all but is made to agree to the buyer’s advances. Furthermore, abusing the machine also seems questionable, even if it is programmed to accept abuse (Fox 2018). The problem of what, in a human/MHA relationship, consent would consist in points up how obfuscating and unconvincing it is to defend, so prematurely, potential users’ having a prima facie right to a companion MHA.
Mechanical military MHAs are another conceivable type of MHA, although for what purpose the military would make an MHA is not currently clear. (See Maxey 2013.)
 
20
There are some differences here between basic rights and what I have called “foundational” rights. I suggest that rights to life and liberty are even more basic than Shue’s basic rights, even by his own definition, as one cannot enjoy the right to assembly or expression unless one is alive and free to attend the assembly or write articles, or to eat and have property (even if only in oneself) to be protected.
 
21
Alternatively, it is possible that machines such as MHAs should themselves be granted some rights, though not the full panoply of human rights. As Darling (2012) and other writers point out, there could be a good socio-psychological case that allowing an MHA’s owner to abuse it may in itself create more serious moral ramifications. (One idea about the moral concern here is much like that in the early years of animal-welfare protectionism, when it was observed that abusing animals is a good route to abusing humans; see also Rollin 1989.) If the device is indeed sentient, it would be immoral to abuse it. For a summation of a topic that deserves an entire article, see Miller 2017.
 
22
Fissile substances such as uranium-235, nuclear-construction materials, and nuclear reactors, as well as smallpox and other dangerous microbes, are controlled by national and international bodies, partly to protect the global human rights to security, life, and health. These are materials and techniques designated as not fit for private, everyday use, and one would be hard put to contend that private citizens’ rights to possess these materials overwhelm the basic rights to life and health. Rights to do what one does in private are limited if that activity entails uses of materials and techniques that may endanger others’ rights.
 
23
One problem with the MTT is that, even if the tested automaton passed, the machine may still not be conscious, intelligent, rational, or even sentient, but only well-imitating behaviors that, in humans, we believe are driven by such traits. The idea of the original Turing Test was that if the machine used language well enough, only (artificial) intelligence equal to or surpassing a human’s could explain the result. However, this linguistic-only test merely begs the question of what intelligence consists in and the degree to which language use reflects it. The MTT is supposed to help eliminate “cheating” by mere imitation, but perhaps even sentience and consciousness may be merely imitated.
 
24
Even if these forces have grown more formidable in recent years, if it is indeed morally right to seek policy control, counterforces should not be a reason to forsake policymaking. Nevertheless, the appearance of groups such as the Future of Life Institute indicates how many eminent persons are coming to see the need for oversight of such new types of powerful techniques.
 
25
One further ethics issue to consider: If automata are economically more efficient than human caregivers but the cared-for person is repulsed by an automaton caregiver, would this person be penalized for having “expensive tastes”?
 
References
Baars, J. (2012) Aging and the art of living. Baltimore: Johns Hopkins University Press.
Baylis, F. (2004) Canada bans human cloning (In brief). Hastings Center Rep 34(3):5.
Boehm, C. (1999) Hierarchy in the forest: The evolution of egalitarian behavior. Cambridge: Harvard University Press.
Burch, E. S., & Csonka, Y. (1999) The Caribou Inuit. In Lee, R. B., & Daly, R. (Eds.), The Cambridge encyclopedia of hunters and gatherers (pp. 56–60). Cambridge: Cambridge University Press.
Calkins, M. (2011) King car and the ethics of automobile proponents’ strategies in China and India. New York: Nova Science Publishers.
Carr, A. (2008) Is business bluffing ethical? In Donaldson, T., & Werhane, P. H. (Eds.), Ethical issues in business: A philosophical approach (pp. 136–142). Upper Saddle River, NJ: Pearson Prentice Hall. Reprinted from Harvard Bus Rev (1968), January/February.
Coeckelbergh, M. (2012) Are emotional robots deceptive? IEEE Transactions on Affective Computing 3(4):388–393.
Connor, J., & Mazanov, J. (2009) Would you dope? A general population test of the Goldman dilemma. British J Sports Med 43:871–872.
Ekman, P. (Ed.) (2006) Darwin and facial expression: A century of research in review. Cambridge: Malor Books.
Ekman, P., Friesen, W. V., O’Sullivan, M., Chan, A., et al. (1987) Universals and cultural differences in the judgments of facial expressions of emotion. J Personality & Soc Psych 53(4):712–717.
Endicott, K. L. (1999) Gender relations in hunter-gatherer societies. In Lee, R. B., & Daly, R. (Eds.), The Cambridge encyclopedia of hunters and gatherers (pp. 411–418). Cambridge: Cambridge University Press.
England, P., Budig, M., & Folbre, N. (2002) Wages of virtue: The relative pay of care work. Soc Problems 49(4):455–473.
Ferrari, F., Paladino, M. P., & Jetten, J. (2016) Blurring human–machine distinctions: Anthropomorphic appearance in social robots as a threat to human distinctiveness. Inter J Soc Robotics 8:287–302.
Flannery, K., & Marcus, J. (2012) The creation of inequality: How our prehistoric ancestors set the stage for monarchy, slavery, and empire. Cambridge: Harvard University Press.
Fox, A. Q. (2018) On empathy and alterity: How sex robots encourage us to reconfigure moral status. Master’s thesis, University of Twente, Enschede, Netherlands.
Friedman, M. (1970) The social responsibility of business is to increase its profits. New York Times Mag, September 13: 32–33, 122–124.
Galliot, J. (2016) Military robots: Mapping the moral landscape. Oxon: Routledge.
Gertz, N. (2014) The philosophy of war and exile. Houndmills: Palgrave Macmillan.
Gunkel, D. J. (2012) The machine question: Critical perspectives on AI, robots, and ethics. Cambridge: MIT Press.
Hauskeller, M. (2014) Sex and the posthuman condition. Hampshire: Palgrave Macmillan.
Held, V. (2006) The ethics of care: Personal, political and global. Oxford: Oxford University Press.
Hughes, J. (2004) Citizen cyborg: Why democratic societies must respond to the redesigned human of the future. Boulder: Westview Press.
Ihde, D. (2009) Postphenomenology and technoscience: The Peking University lectures. Albany: State University of New York Press.
Ingold, T. (1999) On the social relations of the hunter-gatherer band. In Lee, R. B., & Daly, R. (Eds.), The Cambridge encyclopedia of hunters and gatherers (pp. 399–410). Cambridge: Cambridge University Press.
Kahneman, D., & Tversky, A. (1979) Prospect theory: An analysis of decision under risk. Econometrica 47(2):263–292.
Kalat, J. W. (2013) Biological psychology. Belmont: Wadsworth, p. 383.
Lee, R. B., & Daly, R. (Eds.) (1999) The Cambridge encyclopedia of hunters and gatherers. Cambridge: Cambridge University Press.
Mill, J. S. (1952) On liberty. In Hutchins, R. M. (Ed.), Great books of the Western world 43: American state papers, Federalist, J.S. Mill (pp. 265–323). Chicago: Encyclopedia Britannica.
Miller, L. (2012) The moral philosophy of automobiles. Journal of Agricultural and Environmental Ethics 25:637–655.
Miller, L. (2013) Why should one reproduce? The rationality and morality of human reproduction. Dissertation, City University of New York Graduate Center.
Miller, L. F. (2015) Granting automata human rights: Challenge to a basis of full-rights privilege. Human Rights Review 16(4):369–391.
Miller, L. F. (2016) Review of Sex and the posthuman condition, by Michael Hauskeller. Journal of Science and Engineering Ethics 22(5):1569–1574.
Poulsen, A., Burmeister, O. K., & Tien, D. (2018) A new design approach for elderly care robots. Australasian Conference on Information Systems, Sydney.
Preston, R. (2002) The demon in the freezer. New York: Random House.
Razavi, S., & Staab, S. (2010) Underpaid and overworked: A cross-national perspective on care workers. Inter Labor Rev 149(4):407–422.
Risse, M. (2019) Human rights and artificial intelligence: An urgently needed agenda. Hum Rights Q 41(1):1–16.
Rodogno, R. (2016) Social robots, fiction, and sentimentality. Eth Inform Tech 18(4):257–268.
Rollin, B. (1989) The unheeded cry: Animal consciousness, animal pain, and science. Oxford: Oxford University Press.
Santoni de Sio, F., & van Wynsberghe, A. (2016) When should we use care robots? The nature-of-activities approach. Science Engin Eth 22(6):1745–1760.
Schlosser, E. (2014) Command and control: Nuclear weapons, the Damascus accident, and the illusion of safety. New York: Penguin.
Seidell, J. C. (2005) Epidemiology: Definition and classification of obesity. In Kopelman, P. G., Caterson, I. D., Stock, M. J., & Dietz, W. H. (Eds.), Clinical obesity in adults and children (pp. 3–11). West Sussex: Wiley-Blackwell.
Sharkey, A. (2014) Robots and human dignity: A consideration of the effects of robot care on the dignity of older people. Eth Inform Tech 16(1):63–75.
Sharkey, A. (2015) Robot teachers: The very idea! Behav Brain Sciences 38:46–47.
Sharkey, N., & Sharkey, A. (2010) The crying shame of robot nannies: An ethical appraisal. Interaction Studies 11(2):161–190.
Sharkey, A., & Sharkey, N. (2012) Granny and the robots: Ethical issues in robot care for the elderly. Eth Inform Tech 14:27–40.
Shaw, M. C. (2001) Engineering problem-solving: A classical approach. Norwich: William Andrew.
Shue, H. (1996) Basic rights: Subsistence, affluence, and U.S. foreign policy. 2nd ed. Princeton: Princeton University Press.
Sorell, T., & Draper, H. (2014) Robot carers, ethics, and older people. Eth Inform Tech 16(3):183–195.
Sparrow, R., & Sparrow, L. (2006) In the hands of machines? The future of aged care. Mind Machine 16:141–161.
Ting, G. H. Y., & Woo, J. (2009) Elder care: Is legislation of family responsibility the solution? Asian J Geront Geriat 4:72–75.
Tronto, J. C. (1993) Moral boundaries: A political argument for an ethic of care. New York: Routledge.
United Nations (1948) The universal declaration of human rights. New York: United Nations.
United Nations (2005) United Nations declaration on human cloning. New York: United Nations.
Vallor, S. (2016) Technology and the virtues: A philosophical guide to a future worth wanting. Oxford: Oxford University Press.
Van der Plas, A., Smits, M., & Wehrmann, C. (2010) Beyond speculative robot ethics: A vision assessment study on the future of the robotic caretaker. Accountability in Research 17(6):299–315.
Van Wynsberghe, A. (2013) Designing robots for care: Care centered value-sensitive design. Science Engin Eth 19(2):407–433.
Van Wynsberghe, A. (2016) Service robots, care ethics, and design. Eth Inform Tech 18(4):311–321.
Verbeek, P. (2005) What things do: Philosophical reflections on technology, agency, and design. University Park: Pennsylvania State University Press.
Metadata
Title
Human Rights of Users of Humanlike Care Automata
Author
Lantz Fleming Miller
Publication date
01.02.2020
Publisher
Springer Netherlands
Published in
Human Rights Review / Issue 2/2020
Print ISSN: 1524-8879
Electronic ISSN: 1874-6306
DOI
https://doi.org/10.1007/s12142-020-00581-2
