Published in: International Journal of Social Robotics 4/2021

23.05.2020

Trust in and Ethical Design of Carebots: The Case for Ethics of Care

by Gary Chan Kok Yew


Abstract

The paper has two main objectives: to examine the challenges arising from the use of carebots, and to discuss how carebot design can address these challenges. First, it notes that the use of carebots to care for the physical and mental health of the elderly, children and the disabled, and to serve as assistive tools and social companions, encounters several main challenges. These relate to the extent of care robots' ability to care for humans, potential deception through robot morphology and communications, (over)reliance on or attachment to robots, and the risks of carebot use without informed consent and of potential infringements of privacy. Second, these challenges impinge upon issues of ethics and trust, which overlap somewhat in concept and practice. Existing ethical guidelines, standards and regulations are general in nature and lack a central ethical framework and concrete principles applicable to care contexts. Hence, to deal with these challenges, the third part of the paper proposes that carebots be designed with the Ethics of Care as the central ethical framework. It argues that the Ethics of Care offers the following advantages: (a) it provides sufficiently concrete principles and embodies values that are sensitive and applicable to the design of carebots and the contexts of caring practices; (b) it coheres with the tenets of Principlism and selected ethical theories (utilitarianism, deontology and virtue ethics); and (c) it is closely associated with the preservation and maintenance of trust.


Footnotes
1
Sorell and Draper [67].
 
2
Dautenhahn and Werry [19].
 
3
Dautenhahn and Werry [19, p. 5].
 
4
Wainer et al. [85].
 
5
Alemi et al. [1] and Meghdari et al. [47, 48] relating to children suffering from cancer; Meghdari et al. [49] on children with hearing disabilities.
 
6
The social robot was jointly developed by NEC Japan and RECCSI (Research Centre for Computers, Communication and Social Innovation) of La Trobe University in Melbourne. It possesses human-like attributes such as a baby-face appearance, human voice, facial expressions, gestures and body movements, and is able to recognise voices, human faces, emotions and speech acoustics.
 
7
“Assistive Technologies to Improve Healthcare quality, productivity” MIS Today, 27 Feb 2016.
 
8
“Charting the way to hospitals of the future” 23 January 2018 at https://edb.gov.sg/en/news-and-resources/insights/innovation/charting-the-way-to-hospitals-of-the-future.html (accessed on 20 February 2019). These rehabilitative robots are developed by the Centre for Healthcare Assistive & Robotics Technology (CHART) in Singapore.
 
9
Sparrow and Sparrow [68].
 
10
This does not necessarily mean robots cannot have internal states that mirror human emotions and which can be communicated to humans through facial expression, body posture and tone of voice: see Breazeal and Brooks [10] on the Kismet social robot with its three-dimensional affect space of arousal, valence and stance.
 
11
Metzinger [50].
 
12
Pour et al. [57].
 
13
Wallach and Allen [86, p. 152].
 
14
Coeckelbergh [15, p. 238].
 
15
Coeckelbergh [14, p. 219].
 
16
Mittelstadt [51].
 
17
Darling [18].
 
18
See Burrell [12] on the economic and social inequalities that can arise from the opacity of machine learning algorithms including her examples concerning Internet fraud and spam filtering.
 
19
Grodzinsky et al. [33, p. 97].
 
20
Grodzinsky et al. [33, p. 97].
 
21
Grodzinsky et al. [33, p. 99].
 
22
Isaac and Bridewell [37].
 
23
Schermer [61, p. 160].
 
24
Schermer [61, p. 165].
 
25
See Felzmann et al. [24] on relational understanding of transparency that is dependent on the communication between technology providers and users, where trustworthiness is assessed based on contextual factors that mediate the value of such communications including factors that make transparency meaningful and trustworthy in the users’ eyes.
 
26
Grodzinsky et al. [33].
 
27
Coeckelbergh [16, p. 288].
 
28
Riek and Howard [58].
 
29
Sharkey and Sharkey [64].
 
30
Parks [56].
 
31
Loder and Nicholas [44].
 
32
Sharkey and Sharkey [63].
 
33
Vallor [78, p. 223].
 
34
Vallor [78, p. 226].
 
35
Borenstein et al. [9, p. 135; 84].
 
36
European Parliament [22, at para 32].
 
37
Coeckelbergh et al. [17].
 
38
Vallor [77].
 
39
See Nussbaum [54] cited in Coeckelbergh [15].
 
40
Leenes et al. [42, p. 22].
 
41
Ienca et al. [36].
 
42
Fosch-Villaronga and Albo-Canals [26, p. 83].
 
43
Para 125.
 
44
Gambetta [29, p. 217].
 
45
I have omitted Taddeo’s point that “The trustee B may or may not be aware that trustor A trusts B”. This is merely a neutral feature.
 
46
Taddeo [71, p. 244].
 
47
At p. 15. See also Kirkpatrick et al. [40].
 
48
Buechner and Tavani [11].
 
49
This focus on stakeholders is also consistent with the notion of Responsible Research and Innovation that take account of societal impacts under the European Union’s Framework Programmes for Research and Technological Development: see Von Schomberg [83].
 
50
Wagner et al. [84, p. 22].
 
51
Banavar [6].
 
52
Wallach and Allen [86, pp. 79–80].
 
53
Wallach and Allen [86, p. 80].
 
54
For a macro-perspective of the regulatory process of robot governance, see Fosch-Villaronga and Heldeweg [27].
 
55
Wallach and Allen [86, p. 81].
 
56
Sparrow and Sparrow [68].
 
57
Vandemeulebroucke et al. [81].
 
58
Fosch-Villaronga and Özcan [28].
 
59
Grodzinsky et al. [32, p. 26].
 
60
Ojha et al. [55] on the development of the Ethical Emotion Generation System (EEGS), based on a data structure representing emotions in the form (Name, Valence, Degree, Threshold, Intensity, Decay Time): Name denotes the type of emotion; Valence specifies whether the emotion is positive or negative; Degree represents the extent of that positivity or negativity; Threshold represents the minimum intensity required to trigger the emotion; Intensity represents the strength of the emotional experience; and Decay Time denotes the time required for the emotion's intensity to drop back to 0.
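The emotion tuple described above can be sketched as a simple data structure. This is an illustrative reading only, not the authors' implementation; in particular, the linear decay rule and the field types are assumptions for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Emotion:
    """One emotion in the (Name, Valence, Degree, Threshold, Intensity,
    Decay Time) representation described for the EEGS."""
    name: str          # type of the emotion, e.g. "joy"
    valence: str       # "positive" or "negative"
    degree: float      # extent of the positivity or negativity
    threshold: float   # minimum intensity required to trigger the emotion
    intensity: float   # strength of the emotional experience
    decay_time: float  # time for the intensity to drop back to 0

    def is_triggered(self) -> bool:
        # the emotion is expressed only once intensity reaches its threshold
        return self.intensity >= self.threshold

    def decayed_intensity(self, elapsed: float) -> float:
        # assumed linear decay: intensity falls to 0 over decay_time
        if self.decay_time <= 0:
            return 0.0
        return max(0.0, self.intensity * (1 - elapsed / self.decay_time))

joy = Emotion("joy", "positive", 0.8, 0.3, 0.6, 10.0)
print(joy.is_triggered())          # True: 0.6 >= 0.3
print(joy.decayed_intensity(20.0)) # 0.0: fully decayed after decay_time
```

The decay function here is only one plausible choice; the cited paper does not fix the decay curve in the footnote's description.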
 
61
Wallach and Allen [86, p. 98].
 
62
See for example Patricia Churchland on connectionist learning for moral cognition.
 
63
This was what Swanton [69, p. 273] termed the “problem of indeterminacy”.
 
64
Vallor [78, p. 221].
 
65
Wallach and Allen [86, p. 98].
 
66
Beauchamp and Childress [7, pp. 12–13].
 
67
Childress [13, p. 68].
 
68
Sorell and Draper [67].
 
69
Ross [59].
 
70
Anderson and Anderson [2].
 
71
Wallach and Allen [86, pp. 128–129].
 
72
Anderson and Anderson [3].
 
73
Asaro [4, p. 14].
 
74
E.g. European Commission [21]: trustworthy AI systems should be (1) lawful, (2) ethical, and (3) robust from a technical and social perspective; and the ‘key requirements for Trustworthy AI are: (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) environmental and societal well-being and (7) accountability’ (p. 4).
 
75
Para 32.
 
76
Para 71.
 
77
Para 82.
 
79
Article 19.
 
80
Van Est et al. [79] at para 3.9.2.
 
84
Winfield and Jirotka [87, p. 3].
 
85
E.g. Slote [66].
 
86
Slote [66, p. 12].
 
87
Engster [20, p. 2].
 
88
Held [34, p. 546].
 
89
Held [34, p. 51].
 
90
Held [34, p. 72].
 
91
Slote [66, p. 4].
 
92
Tronto [72, p. 108].
 
93
Tronto [72, pp. 105–108].
 
94
Tronto [73].
 
95
Tronto [73, p. 167].
 
96
Tronto [73, p. 167].
 
97
Vanlaere and Gastmans [82].
 
98
Engster [20, p. 4].
 
99
Engster [20, p. 11].
 
100
Lehoux and Grimard [43, p. 334].
 
101
European Parliament [22, at para 32].
 
103
van Wynsberghe [80]. See also Salvini [60, p. 436].
 
104
van Wynsberghe [80, p. 420].
 
105
Riek and Howard [58].
 
106
Khosla et al. [39, para 6.3].
 
107
Slote [66, p. 43].
 
108
See chapter 12 on “Virtues of Practice”.
 
109
Swanton [69, p. 253].
 
110
Swanton [69, p. 256].
 
111
Swanton [69, p. 258].
 
112
Held [34, p. 42].
 
References
1.
Alemi A, Ghanbarzadeh A, Meghdari A, Moghadam LJ (2016) Clinical application of a humanoid robot in pediatric cancer interventions. Int J Soc Robot 8:743–759
3.
Anderson M, Anderson SL (2010) Robot be good: a call for ethical autonomous machines. Sci Am 303(4):72–77
4.
Asaro PM (2006) What should we want from a robot ethic? Int Rev Inf Ethics 6(12):9–16
5.
Baier A (1987) Hume: the woman’s moral theorist? In: Kittay EV, Meyers D (eds) Women and moral theory. Rowman & Littlefield, Lanham
7.
Beauchamp TL, Childress JF (2009) Principles of biomedical ethics, 6th edn. Oxford University Press, New York
8.
Borenstein J, Pearson Y (2010) Robot caregivers: harbingers of expanded freedom for all? Ethics Inf Technol 12(3):277–288
9.
Borenstein J, Howard A, Wagner AR (2017) Pediatric robots and ethics: the robot is ready to see you now, but should it be trusted? In: Lin P, Jenkins R, Abney K (eds) Robot ethics 2.0: from autonomous cars to artificial intelligence. Oxford University Press, Oxford, pp 127–141
10.
Breazeal C, Brooks R (2005) Robot emotion: a functional perspective. In: Fellous J-M, Arbib MA (eds) Who needs emotions? The brain meets the robot. Oxford University Press, Oxford, pp 271–310
11.
Buechner J, Tavani HT (2011) Trust and multi-agent systems: applying the ‘diffuse, default model’ of trust to experiments involving artificial agents. Ethics Inf Technol 13(1):39–51
12.
Burrell J (2016) How the machine ‘thinks’: understanding opacity in machine learning algorithms. Big Data Soc 3(1):1–12
13.
Childress JF (2012) A principle-based approach. In: Kuhse H, Singer P (eds) A companion to bioethics, 2nd edn. Wiley-Blackwell, Hoboken, pp 67–76
14.
Coeckelbergh M (2009) Personal robots, appearance, and human good: a methodological reflection on roboethics. Int J Soc Robot 1:217–221
15.
Coeckelbergh M (2010) Healthcare, capabilities and AI assistive technologies. Ethic Theory Moral Pract 13:181–190
16.
Coeckelbergh M (2012) Care robots, virtual virtue, and the best possible life. In: Brey P, Briggle A, Spence E (eds) The good life in a technological age. Taylor & Francis, Abingdon, pp 281–292
17.
Coeckelbergh M, Pop C, Simut R, Peca A, Pintea S, David D, Vanderborght B (2016) A survey of expectations about the role of robots in robot-assisted therapy for children with ASD: ethical acceptability, trust, sociability, appearance, and attachment. Sci Eng Ethics 22(1):47–65
18.
Darling K (2017) “Who’s Johnny?” Anthropomorphic framing in human–robot interaction, integration, and policy. In: Lin P, Jenkins R, Abney K (eds) Robot ethics 2.0: from autonomous cars to artificial intelligence. Oxford University Press, Oxford, pp 173–188
19.
Dautenhahn K, Werry I (2004) Towards interactive robots in autism therapy: background, motivation and challenges. Pragmat Cogn 12(1):1–35
20.
Engster D (2007) The heart of justice. Oxford University Press, Oxford
21.
European Commission (2019) Ethics guidelines for trustworthy artificial intelligence. High-Level Expert Group on AI
22.
European Parliament (2017) Civil law rules on robotics. European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL))
23.
European Parliament (2019) A comprehensive European industrial policy on artificial intelligence and robotics. European Parliament resolution of 12 February 2019 (2018/2088(INI))
24.
Felzmann H, Fosch-Villaronga E, Lutz C, Tamò-Larrieux A (2019) Transparency you can trust: transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data Soc 6(1):1–14
26.
Fosch-Villaronga E, Albo-Canals J (2019) “I’ll take care of you,” said the robot. Paladyn J Behav Robot 10:77–93
27.
Fosch-Villaronga E, Heldeweg M (2018) “Regulation, I presume?” said the robot—towards an iterative regulatory process for robot governance. Comput Law Secur Rev 34(6):1258–1277
29.
Gambetta D (1998) Can we trust trust? In: Gambetta D (ed) Trust: making and breaking cooperative relations. Basil Blackwell, Oxford, pp 213–238
30.
Gilligan C (1982) In a different voice: psychological theory and women’s development. Harvard University Press, Cambridge
31.
Gompei T, Umemuro H (2018) Factors and development of cognitive and affective trust on social robots. In: Sam-Ge S, Cabibihan J-J, Salichs MA, Broadbent E, He H, Wagner AR, Castro-González Á (eds) Lecture notes in computer science, vol 11357. Springer, New York, pp 45–54
32.
Grodzinsky FS, Miller KW, Martin MJ (2011) Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?” Ethics Inf Technol 13(1):17–27
33.
Grodzinsky FS, Miller KW, Wolf MJ (2015) Developing automated deceptions and the impact on trust. Philos Technol 28:91–105
34.
35.
Hoorn JF, Winter SD (2018) Here comes the bad news: doctor robot taking over. Int J Soc Robot 10(4):519–535
36.
Ienca M, Jotterand F, Vica C, Elger B (2016) Social and assistive robotics in dementia care: ethical recommendations for research and practice. Int J Soc Robot 8:565–573
37.
Isaac AMC, Bridewell W (2017) White lies on silver tongues: why robots need to deceive (and how). In: Lin P, Jenkins R, Abney K (eds) Robot ethics 2.0: from autonomous cars to artificial intelligence. Oxford University Press, Oxford, pp 157–172
38.
Jones K (1996) Trust as an affective attitude. Ethics 107(1):4–25
39.
Khosla R, Nguyen K, Chu M-T (2017) Human robot engagement and acceptability in residential aged care. Int J Hum Comput Interact 33(6):510–522
40.
Kirkpatrick J, Hahn EN, Haufler AJ (2017) Trust and human–robot interactions. In: Lin P, Jenkins R, Abney K (eds) Robot ethics 2.0: from autonomous cars to artificial intelligence. Oxford University Press, Oxford, pp 142–156
41.
Kittay E (2002) Love’s labor revisited. Hypatia 17(3):237–250
42.
Leenes R, Palmerini E, Koops B-J, Bertolini A, Salvini P, Lucivero F (2017) Regulatory challenges of robotics: some guidelines for addressing legal and ethical issues. Law Innov Technol 9:1
43.
Lehoux P, Grimard D (2018) When robots care: public deliberations on how technology and humans may support independent living for older adults. Soc Sci Med 211:330–337
44.
Loder J, Nicholas L (2018) Confronting Dr Robot: creating a people-powered future for AI in health. Nesta Health Lab, London
45.
Luhmann N (1979) Trust and power. Wiley, Chichester
46.
Meacham D, Studley M (2017) Could a robot care? It’s all in the movement. In: Lin P, Jenkins R, Abney K (eds) Robot ethics 2.0: from autonomous cars to artificial intelligence. Oxford University Press, Oxford, pp 97–112
47.
Meghdari A, Shariati A, Alemi M, Vossoughi GR, Eydi A (2018) Arash: a social robot buddy to support children with cancer in a hospital environment. Proc Inst Mech Eng 232(6):605–618
48.
Meghdari A, Shariati A, Alemi M, Nobaveh AA (2018) Design performance characteristics of a social robot companion “Arash” for pediatric hospitals. Int J Humanoid Rob 15(5):1850019
49.
Meghdari A, Alemi M, Zakipour M, Kashanian SA (2019) Design and realization of a sign language educational humanoid robot. J Intell Rob Syst 95:3–17
50.
Metzinger T (2013) Two principles for robot ethics. In: Hilgendorf E, Gunther JP (eds) Robotik und Gesetzgebung. Nomos, Baden-Baden, pp 263–302
51.
Mittelstadt B (2017) The doctor will not see you now. In: Otto P, Graf E (eds) 3TH1CS: a reinvention of ethics in the digital age? iRights Media, Berlin, pp 68–77
52.
Nissenbaum H (2001) Securing trust online: wisdom or oxymoron? Boston Univ Law Rev 81(3):635–664
53.
Noddings N (2013) Caring: a relational approach to ethics and moral education. University of California Press, Berkeley
54.
Nussbaum MC (2006) Frontiers of justice: disability, nationality, species membership. Belknap Press, Cambridge
55.
Ojha S, Williams M-A, Johnston B (2018) The essence of ethical reasoning in robot–emotion processing. Int J Soc Robot 10:211–223
56.
Parks JA (2010) Lifting the burden of women’s care work: should robots replace the “human touch”? Hypatia 25:100–120
57.
Pour AG, Taheri A, Alemi M, Meghdari A (2018) Human–robot facial expression reciprocal interaction platform: case studies on children with autism. Int J Soc Robot 10:179–198
59.
Ross WD (2003) The right and the good, 2nd edn. Clarendon Press, Oxford
60.
Salvini P (2015) On ethical, legal and social issues of care robots. In: Mohammed S, Moreno J, Kong K, Amirat Y (eds) Intelligent assistive robots: recent advances in assistive robots for everyday activities. Springer, New York, pp 431–445
61.
Schermer M (2014) Telling the truth: the ethics of deception and white lies in dementia care. In: Foster C, Herring J, Doron I (eds) The law and ethics of dementia. Hart Publishing, Oxford
62.
Searle J (1980) Minds, brains and programs. Behav Brain Sci 3(3):417–457
63.
Sharkey N, Sharkey A (2010) Living with robots: ethical tradeoffs in eldercare. In: Wilks Y (ed) Close engagements with artificial companions: key social, psychological, ethical and design issues. John Benjamins, Amsterdam, pp 245–256
64.
65.
66.
67.
Sorell T, Draper H (2014) Robot carers, ethics, and older people. Ethics Inf Technol 16:183–195
68.
Sparrow R, Sparrow L (2006) In the hands of machines? The future of aged care. Mind Mach 16(2):141–161
69.
Swanton C (2003) Virtue ethics: a pluralistic view. Oxford University Press, Oxford
70.
Taddeo M (2009) Defining trust and E-trust: from old theories to new problems. Int J Technol Hum Interact 5(2):23–35
71.
Taddeo M (2010) Modeling trust in artificial agents: a first step toward the analysis of e-trust. Mind Mach 20(2):243–257
72.
Tronto J (1993) Moral boundaries: a political argument for an ethic of care. Routledge, New York
73.
Tronto J (2010) Creating caring institutions: politics, plurality, and purpose. Ethics Soc Welf 4(2):158–171
74.
Tuomela M, Hofmann S (2003) Simulating rational social normative trust, predictive trust, and predictive reliance between agents. Ethics Inf Technol 5:163–176
76.
United Nations Educational, Scientific and Cultural Organization (UNESCO) and World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) (2017) Report of COMEST on robotics ethics, 14 September 2017
77.
Vallor S (2011) Carebots and caregivers: sustaining the ethical ideal of care in the twenty-first century. Philos Technol 24:251–268
78.
Vallor S (2016) Technology and the virtues. Oxford University Press, Oxford
79.
Van Est R, Gerritsen JBA, Kool L (2017) Human rights in the robot age: challenges arising from the use of robotics, artificial intelligence, and virtual and augmented reality—expert report written for the Committee on Culture, Science, Education and Media of the Parliamentary Assembly of the Council of Europe (PACE). Rathenau Instituut, The Hague
80.
van Wynsberghe A (2013) Designing robots for care: care centered value-sensitive design. Sci Eng Ethics 19:407–433
81.
Vandemeulebroucke T, de Casterlé BD, Gastmans C (2018) The use of care robots in aged care: a systematic review of argument-based ethics literature. Arch Gerontol Geriatr 74:15–25
82.
Vanlaere L, Gastmans C (2011) A personalist approach to care ethics. Nurs Ethics 18(2):161–173
83.
Von Schomberg R (2013) A vision of responsible innovation. In: Owen R, Heintz M, Bessant J (eds) Responsible innovation. Wiley, London, pp 51–74
84.
Wagner AR, Borenstein J, Howard A (2018) Overtrust in the robotic age: the ethical challenge. Commun ACM 61(9):22–24
85.
Wainer J, Dautenhahn K, Robins B, Amirabdollahian F (2014) A pilot study with a novel setup for collaborative play of the humanoid robot KASPAR with children with autism. Int J Soc Robot 6(1):45–65
86.
Wallach W, Allen C (2009) Moral machines: teaching robots right from wrong. Oxford University Press, Oxford
87.
Winfield AFT, Jirotka M (2018) Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philos Trans R Soc A 376:20180085
Metadata
Title: Trust in and Ethical Design of Carebots: The Case for Ethics of Care
Author: Gary Chan Kok Yew
Publication date: 23.05.2020
Publisher: Springer Netherlands
Published in: International Journal of Social Robotics / Issue 4/2021
Print ISSN: 1875-4791
Electronic ISSN: 1875-4805
DOI: https://doi.org/10.1007/s12369-020-00653-w
