
2019 | Original Paper | Book Chapter

Explaining Sympathetic Actions of Rational Agents

Authors: Timotheus Kampik, Juan Carlos Nieves, Helena Lindgren

Published in: Explainable, Transparent Autonomous Agents and Multi-Agent Systems

Publisher: Springer International Publishing


Abstract

Typically, humans do not act purely rationally in the sense of classical economic theory. Different patterns of human action have been identified that are not aligned with the traditional view of human actors as rational agents who act to maximize their own utility function. For instance, humans often act sympathetically; i.e., they choose actions that serve others in disregard of their own egoistic preferences. Even if a sympathetic action yields no immediate benefit, it can be beneficial for the executing individual in the long run. This paper builds on the premise that it can be beneficial to design autonomous agents that employ sympathetic actions in a similar manner as humans do. We create a taxonomy of sympathetic actions that reflects the different goal types an agent can pursue when acting sympathetically. To ensure that sympathetic actions are recognized as such, we propose different explanation approaches autonomous agents may use. In this context, we focus on human-agent interaction scenarios. As a first step towards an empirical evaluation, we conduct a preliminary human-robot interaction study that investigates the effect of explanations of (somewhat) sympathetic robot actions on the human participants of human-robot ultimatum games. While the study does not provide statistically significant findings (though it shows notable differences), it can inform future in-depth empirical evaluations.
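
To make the study setting concrete, the following minimal Python sketch illustrates the payoff rule of a single ultimatum game round. The ten-coin split, the acceptance threshold, and the function name are illustrative assumptions, not the exact protocol of the reported study.

# Minimal sketch of one ultimatum game round (illustrative only; split size,
# acceptance rule, and names are assumptions, not the study's protocol).
def play_round(total_coins: int, offer_to_responder: int, acceptance_threshold: int):
    """Return (proposer payoff, responder payoff) for one round.

    The proposer offers part of the coins; the responder accepts if the offer
    meets their threshold, otherwise both players receive nothing.
    """
    if offer_to_responder >= acceptance_threshold:
        return total_coins - offer_to_responder, offer_to_responder
    return 0, 0

# A "sympathetic" proposer concedes more than the minimal offer a purely
# utility-maximizing agent would make.
print(play_round(total_coins=10, offer_to_responder=5, acceptance_threshold=3))  # (5, 5)
print(play_round(total_coins=10, offer_to_responder=1, acceptance_threshold=3))  # (0, 0)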


Footnotes
1
For a survey of XAI research, see: Adadi and Berrada [1].
 
2
The cited definition is based on another definition introduced by Doshi-Velez and Kim [10].
 
3
Note that we use the term sympathetic and not altruistic actions because for the agent, conceding utility to others is not a goal in itself; i.e., one could argue the agent is not altruistic because it is not “motivated by a desire to benefit someone other than [itself] for that person’s sake” [16].
 
4
We assume this for the sake of simplicity and to avoid diverging from the core of the problem.
 
5
Norm emergence is a well-studied topic in the multi-agent systems community (see, e.g. Savarimuthu et al. [25]).
 
6
Setting the mode requires manual intervention by the agent operator.
 
9
It is noteworthy that, in general, only one of the analyzed games includes a rejection by the agent.
 
11
We concede that the reward might be negligible for many participants.
 
12
There was no control of whether the amount of sweets corresponded to the participant's performance in the game.
 
13
This question was added to the questionnaire after the first four participants had already completed the study; i.e., four participants were not asked this question, including the two participants who had prior knowledge of the ultimatum game (\(n_{Q7} = 17\)).
 
14
Data set and analysis code are available at http://s.cs.umu.se/jo4bu3.
 
15
We choose the Mann–Whitney U test to avoid making assumptions regarding the distribution type of the game results and niceness score. However, considering the small sample size, strong, statistically significant results cannot be expected with any method.
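As an illustration of this analysis step, the following minimal sketch runs a Mann–Whitney U test with SciPy; the niceness scores shown are hypothetical placeholders, not the study's data set.

# Minimal sketch of the Mann-Whitney U test mentioned above, using SciPy.
# The two score lists are hypothetical placeholders, not the study's data.
from scipy.stats import mannwhitneyu

niceness_explained = [4, 5, 3, 5, 4, 4]  # hypothetical group with explanations
niceness_baseline = [3, 4, 3, 2, 4, 3]   # hypothetical group without explanations

statistic, p_value = mannwhitneyu(niceness_explained, niceness_baseline,
                                  alternative="two-sided")
print(f"U = {statistic}, p = {p_value:.3f}")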
 
16
See, e.g.: [5].
 
17
See the analysis notebook at http://s.cs.umu.se/jo4bu3.
 
18
We consider the outlier detection and removal an interesting observation of the exploratory analysis, as it demonstrates the data set's sensitivity to a single extreme case. We concede that this approach to outlier exclusion should be avoided when claiming statistical significance. Also, a multivariate analysis of variance (MANOVA) with the game type as the independent variable and niceness, number of rejects, and number of coins received by the agent as dependent variables did not yield a significant result.
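For illustration, the following minimal sketch shows how such a MANOVA can be set up with statsmodels and pandas; the data frame, its values, and the column names are hypothetical placeholders, not the study's data set.

# Minimal sketch of the MANOVA mentioned above, using statsmodels.
# The data frame below is a hypothetical placeholder, not the study's data.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.DataFrame({
    "game_type": ["explained"] * 4 + ["baseline"] * 4,  # independent variable
    "niceness": [5, 4, 5, 4, 3, 4, 3, 2],               # dependent variables
    "rejects": [0, 1, 0, 0, 1, 2, 1, 1],
    "coins": [6, 5, 6, 5, 4, 3, 4, 4],
})

# Game type as independent variable; niceness, rejects, and coins received
# by the agent as dependent variables, mirroring the analysis described above.
manova = MANOVA.from_formula("niceness + rejects + coins ~ game_type", data=df)
print(manova.mv_test())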
 
19
Note that the sentiment was interpreted and aggregated by the researchers, based on the qualitative answers.
 
20
See, for example: Bobadilla et al. [3].
 
References
1.
Adadi, A., Berrada, M.: Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6, 52138–52160 (2018)
2.
Bench-Capon, T., Atkinson, K., McBurney, P.: Altruism and agents: an argumentation based approach to designing agent decision mechanisms. In: Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems, vol. 2, pp. 1073–1080. International Foundation for Autonomous Agents and Multiagent Systems, Richland (2009)
3.
Bobadilla, J., Ortega, F., Hernando, A., Bernal, J.: A collaborative filtering approach to mitigate the new user cold start problem. Knowl.-Based Syst. 26, 225–238 (2012)
4.
Bornstein, G., Yaniv, I.: Individual and group behavior in the ultimatum game: are groups more “rational” players? Exp. Econ. 1(1), 101–108 (1998)
5.
Campbell, M.J., Gardner, M.J.: Statistics in medicine: calculating confidence intervals for some non-parametric analyses. British Med. J. (Clin. Res. Ed.) 296(6634), 1454 (1988)
6.
Chandrasekaran, A., Yadav, D., Chattopadhyay, P., Prabhu, V., Parikh, D.: It takes two to tango: towards theory of AI’s mind. arXiv preprint arXiv:1704.00717 (2017)
7.
Core, M.G., Lane, H.C., Van Lent, M., Gomboc, D., Solomon, S., Rosenberg, M.: Building explainable artificial intelligence systems. In: AAAI, pp. 1766–1773 (2006)
8.
Dautenhahn, K.: The art of designing socially intelligent agents: science, fiction, and the human in the loop. Appl. Artif. Intell. 12(7–8), 573–617 (1998)
9.
Defense Advanced Research Projects Agency (DARPA): Broad agency announcement - explainable artificial intelligence (XAI). Technical report DARPA-BAA-16-53, Arlington, VA, USA (Aug 2016)
10.
11.
Fishburn, P.: Utility Theory for Decision Making. Publications in Operations Research. Wiley, Hoboken (1970)
12.
Güth, W., Schmittberger, R., Schwarze, B.: An experimental analysis of ultimatum bargaining. J. Econ. Behav. Organ. 3(4), 367–388 (1982)
13.
Harbers, M., Van den Bosch, K., Meyer, J.J.: Modeling agents with a theory of mind: theory-theory versus simulation theory. Web Intell. Agent Syst. Int. J. 10(3), 331–343 (2012)
14.
Kahneman, D.: Maps of bounded rationality: psychology for behavioral economics. Am. Econ. Rev. 93(5), 1449–1475 (2003)
15.
Kampik, T., Nieves, J.C., Lindgren, H.: Towards empathic autonomous agents. In: EMAS 2018 (2018)
16.
Kraut, R.: Altruism. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy, spring 2018 edn. Metaphysics Research Lab, Stanford University (2018)
17.
Kulkarni, A., Chakraborti, T., Zha, Y., Vadlamudi, S.G., Zhang, Y., Kambhampati, S.: Explicable robot planning as minimizing distance from expected behavior. CoRR, arXiv:11.05497 (2016)
18.
Langley, P., Meadows, B., Sridharan, M., Choi, D.: Explainable agency for intelligent autonomous systems. In: AAAI, pp. 4762–4764 (2017)
19.
Leslie, A.M.: Pretense and representation: the origins of “theory of mind”. Psychol. Rev. 94(4), 412 (1987)
20.
Melo, C.D., Marsella, S., Gratch, J.: People do not feel guilty about exploiting machines. ACM Trans. Comput.-Hum. Interact. 23(2), 8:1–8:17 (2016)
21.
Miller, T., Howe, P., Sonenberg, L.: Explainable AI: beware of inmates running the asylum. In: IJCAI-17 Workshop on Explainable AI (XAI), vol. 36 (2017)
22.
Parsons, S., Wooldridge, M.: Game theory and decision theory in multi-agent systems. Auton. Agents Multi-Agent Syst. 5(3), 243–254 (2002)
23.
Rabinowitz, N., Perbet, F., Song, F., Zhang, C., Eslami, S.M.A., Botvinick, M.: Machine theory of mind. In: Dy, J., Krause, A. (eds.) Proceedings of the 35th International Conference on Machine Learning, Proceedings of Machine Learning Research, vol. 80, pp. 4218–4227. PMLR, Stockholmsmässan, Stockholm, 10–15 July 2018
24.
Richardson, A., Rosenfeld, A.: A survey of interpretability and explainability in human-agent systems. In: XAI, p. 137 (2018)
26.
Thaler, R.H.: Anomalies: the ultimatum game. J. Econ. Perspect. 2(4), 195–206 (1988)
Metadata
Title
Explaining Sympathetic Actions of Rational Agents
Authors
Timotheus Kampik
Juan Carlos Nieves
Helena Lindgren
Copyright Year
2019
DOI
https://doi.org/10.1007/978-3-030-30391-4_4