
2018 | Original Paper | Book Chapter

Using Perceptual and Cognitive Explanations for Enhanced Human-Agent Team Performance

Authors: Mark A. Neerincx, Jasper van der Waa, Frank Kaptein, Jurriaan van Diggelen

Published in: Engineering Psychology and Cognitive Ergonomics

Publisher: Springer International Publishing


Abstract

Most explainable AI (XAI) research projects focus on well-delineated topics, such as the interpretability of machine learning outcomes, knowledge sharing in a multi-agent system, or human trust in an agent's performance. For the development of explanations in human-agent teams, a more integrative approach is needed. This paper proposes a perceptual-cognitive explanation (PeCoX) framework for the development of explanations that address both the perceptual and cognitive foundations of an agent's behavior, distinguishing between explanation generation, communication, and reception. It is a generic framework (i.e., the core is domain-agnostic and the perceptual layer is model-agnostic) that is being developed and tested in the domains of transport, health care, and defense. The perceptual level entails the provision of an Intuitive Confidence Measure and the identification of the "foil" in a contrastive explanation. The cognitive level entails the selection of the beliefs, goals, and emotions to include in explanations. Ontology Design Patterns are being constructed for the reasoning and communication, whereas Interaction Design Patterns are being constructed for shaping the multimodal communication. First results show (1) positive effects on humans' understanding of the perceptual and cognitive foundations of an agent's behavior, and (2) the need to harmonize the explanations with the context and the human's information-processing capabilities.
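The abstract describes the two PeCoX levels only at a high level: a cognitive level that selects among an agent's beliefs, goals, and emotions, and a perceptual level that supports contrastive "fact versus foil" explanations. As a purely illustrative sketch (all names and the string templates are hypothetical, not taken from the paper), the two levels might be exposed as follows:

```python
from dataclasses import dataclass

@dataclass
class AgentState:
    """Minimal BDI-style agent state (hypothetical simplification)."""
    beliefs: list   # e.g. ["the patient skipped a dose"]
    goals: list     # e.g. ["keep medication adherence high"]
    emotions: list  # e.g. ["concern"]

def cognitive_explanation(state: AgentState, action: str, focus: str = "goal") -> str:
    """Cognitive level: pick one element (goal, belief, or emotion) to explain an action.

    The `focus` parameter stands in for the framework's selection step, which
    decides which kind of cognitive element best fits the addressee and context.
    """
    if focus == "goal" and state.goals:
        return f"I {action} because I want to {state.goals[0]}."
    if focus == "belief" and state.beliefs:
        return f"I {action} because I believe {state.beliefs[0]}."
    if focus == "emotion" and state.emotions:
        return f"I {action} because I feel {state.emotions[0]}."
    return f"I {action}."

def contrastive_explanation(fact: str, foil: str, difference: str) -> str:
    """Perceptual level: explain why `fact` was produced rather than the `foil`."""
    return f"I classified this as '{fact}' rather than '{foil}' because {difference}."
```

For example, `contrastive_explanation("pedestrian", "cyclist", "no wheels were detected")` yields a "why this rather than that" sentence; the actual framework additionally attaches an Intuitive Confidence Measure to such perceptual outcomes, which this sketch omits.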


Metadata
Copyright Year
2018
DOI
https://doi.org/10.1007/978-3-319-91122-9_18