
2021 | Original Paper | Book Chapter

Human-XAI Interaction: A Review and Design Principles for Explanation User Interfaces

Authors: Michael Chromik, Andreas Butz

Published in: Human-Computer Interaction – INTERACT 2021

Publisher: Springer International Publishing


Abstract

The interdisciplinary field of explainable artificial intelligence (XAI) aims to foster human understanding of black-box machine learning (ML) models through explanation-generating methods. Although the social sciences suggest that explanation is a social and iterative process between an explainer and an explainee, explanation user interfaces and their user interactions have not yet been systematically explored in XAI research. We therefore review prior XAI work featuring explanation user interfaces for ML-based intelligent systems and describe the different concepts of interaction they employ. Furthermore, we present design principles observed in interactive explanation user interfaces. With this work, we inform designers of XAI systems about human-centric ways to tailor their explanation user interfaces to different target audiences and use cases.

Metadata
Title
Human-XAI Interaction: A Review and Design Principles for Explanation User Interfaces
Authors
Michael Chromik
Andreas Butz
Copyright year
2021
DOI
https://doi.org/10.1007/978-3-030-85616-8_36
