Published in: AI & SOCIETY 2/2023

26.07.2022 | Open Forum

An explanation space to align user studies with the technical development of Explainable AI

Authors: Garrick Cabour, Andrés Morales-Forero, Élise Ledoux, Samuel Bassetto



Abstract

Providing meaningful and actionable explanations for end-users is a situated problem requiring the intersection of multiple disciplines to address social, operational, and technical challenges. However, the explainable artificial intelligence community has not commonly adopted or created tangible design tools that allow interdisciplinary work to develop reliable AI-powered solutions. This paper proposes a formative architecture that defines the explanation space from a user-inspired perspective. The architecture comprises five intertwined components to outline explanation requirements for a task: (1) the end-users’ mental models, (2) the end-users’ cognitive process, (3) the user interface, (4) the Human-Explainer Agent, and (5) the agent process. We first define each component of the architecture. Then, we present the Abstracted Explanation Space, a modeling tool that aggregates the architecture’s components to support designers in systematically aligning explanations with end-users’ work practices, needs, and goals. It guides the specification of what needs to be explained (content: end-users’ mental model), why this explanation is necessary (context: end-users’ cognitive process), how to explain it (format: Human-Explainer Agent and user interface), and when the explanations should be given. We then exemplify the tool’s use in an ongoing case study in the aircraft maintenance domain. Finally, we discuss possible contributions of the tool, known limitations, and future work.
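The five components and the content/context/format/timing mapping described above can be sketched as a simple data model. This is an illustrative reading of the abstract, not the paper's formal specification; all class and field names here are assumptions chosen for the sketch, with example values drawn from the aircraft-inspection setting the paper mentions.

```python
from dataclasses import dataclass

@dataclass
class MentalModel:            # (1) the end-user's understanding of the system
    concepts: list[str]

@dataclass
class CognitiveProcess:       # (2) the task step that creates the explanation need
    task_step: str
    information_need: str

@dataclass
class UserInterface:          # (3) where and how the explanation is rendered
    modality: str             # e.g. "text", "saliency map"

@dataclass
class HumanExplainerAgent:    # (4) the component that produces explanations
    strategy: str             # e.g. "feature attribution", "counterfactual"

@dataclass
class AgentProcess:           # (5) the underlying model pipeline being explained
    model_stage: str

@dataclass
class ExplanationRequirement:
    """One requirement in the explanation space: what to explain (content),
    why it is needed (context), how to deliver it (format), and when."""
    content: MentalModel
    context: CognitiveProcess
    fmt: tuple[HumanExplainerAgent, UserInterface]
    process: AgentProcess
    timing: str

# Hypothetical requirement for an AI-assisted visual-inspection task
req = ExplanationRequirement(
    content=MentalModel(concepts=["defect type", "defect severity"]),
    context=CognitiveProcess(task_step="visual inspection",
                             information_need="why was this part flagged?"),
    fmt=(HumanExplainerAgent(strategy="feature attribution"),
         UserInterface(modality="saliency map")),
    process=AgentProcess(model_stage="defect classifier"),
    timing="on demand, after each flagged part",
)
print(req.context.information_need)
```

Aggregating such requirements per task step is one way to read what the Abstracted Explanation Space is meant to support.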


Footnotes
1 https://aiir.nl/
3 In this article, the word end-user refers to the actors who interact directly with the technical device.
4 A user’s understanding and representation of a process, a phenomenon, or a system (Hoffman et al., 2018).
5 See Hoffman et al. (2018), Wilson & Sharples (2015), and Milton (2007) for a complete description of existing data collection methods and techniques.
7 This simplified process does not list the upstream tasks related to the preparation of the workstation, or the downstream operations after the response.
 
References
Akatsuka J, Yamamoto Y, Sekine T, Numata Y, Morikawa H, Tsutsumi K, Yanagi M, Endo Y, Takeda H, Hayashi T (2019) Illuminating clues of cancer buried in prostate MR image: deep learning and expert approaches. Biomolecules 9(11):673
Bisantz A, Roth EM, Watts-Englert J (2015) Study and analysis of complex cognitive work. In: Wilson JR, Sharples S (eds) Evaluation of human work. CRC Press, pp 61–82
Cabour G, Ledoux É, Bassetto S (2021a) Extending system performance past the boundaries of technical maturity: human-agent teamwork perspective for industrial inspection. In: Black NL, Neumann WP, Noy I (eds) Proceedings of the 21st Congress of the International Ergonomics Association (IEA 2021). Springer International Publishing, Cham, pp 75–83
Cabour G, Ledoux É, Bassetto S (2021b) A work-centered approach for cyber-physical-social system design: applications in aerospace industrial inspection. arXiv:2101.05385 [cs]
Cabour G, Ledoux É, Bassetto S (2022) Aligning work analysis and modeling with the engineering goals of a cyber-physical-social system for industrial inspection. Appl Ergon
Chen JY, Procci K, Boyce M, Wright J, Garcia A, Barnes M (2014) Situation awareness-based agent transparency. Army Research Laboratory, Aberdeen Proving Ground, MD, Human Research and Engineering Directorate
Crabtree A, Rouncefield M, Tolmie P (2012) Doing design ethnography. Springer
Darius A, Damaševičius R (2014) Gamification of a project management system. In: Proc of Int Conference on Advances in Computer-Human Interactions (ACHI 2014). Citeseer, pp 200–207
Demir M, McNeese NJ, Cooke NJ (2020) Understanding human-robot teams in light of all-human teams: aspects of team interaction and shared cognition. Int J Hum Comput Stud 140:102436
Dhanorkar S, Wolf CT, Qian K, Xu A, Popa L, Li Y (2021) Who needs to know what, when?: Broadening the explainable AI (XAI) design space by looking at explanations across the AI lifecycle. In: Designing Interactive Systems Conference 2021. ACM, Virtual Event, USA, pp 1591–1602. https://doi.org/10.1145/3461778.3462131
Dong H, Song K, He Y, Xu J, Yan Y, Meng Q (2019) PGA-Net: pyramid feature fusion and global context attention network for automated surface defect detection. IEEE Trans Industr Inf 16(12):7448–7458
Endsley MR, Hoffman R, Kaber D, Roth E (2007) Cognitive engineering and decision making: an overview and future course. J Cogn Eng Decis Mak 1(1):1–21
Fidel G, Bitton R, Shabtai A (2020) When explainability meets adversarial learning: detecting adversarial examples using SHAP signatures. In: 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, pp 1–8
Friedman S, Forbus K, Sherin B (2018) Representing, running, and revising mental models: a computational model. Cogn Sci 42(4):1110–1145
Goh YM, Micheler S, Sanchez-Salas A, Case K, Bumblauskas D, Monfared R (2020) A variability taxonomy to support automation decision-making for manufacturing processes. Prod Plan Control 31(5):383–399
Haberfellner R, de Weck O, Fricke E, Vössner S (2019) Process models: systems engineering and others. In: Systems engineering. Springer, pp 27–98
He Y, Song K, Meng Q, Yan Y (2019) An end-to-end steel surface defect detection approach via fusing multiple hierarchical features. IEEE Trans Instrum Meas 69(4):1493–1504
Henelius A, Puolamäki K, Ukkonen A (2017) Interpreting classifiers through attribute interactions in datasets. arXiv preprint arXiv:1707.07576
Hoffman RR, Mueller ST, Klein G, Litman J (2018) Metrics for explainable AI: challenges and prospects. arXiv preprint arXiv:1812.04608
Johnson M, Bradshaw JM (2021) The role of interdependence in trust. In: Trust in human-robot interaction. Elsevier, pp 379–403
Kobrin JL, Sinharay S, Haberman SJ, Chajewski M (2011) An investigation of the fit of linear regression models to data from an SAT® validity study. ETS Research Report Series 2011(1):i–21
Konig R, Johansson U, Niklasson L (2008) G-REX: a versatile framework for evolutionary data mining. In: 2008 IEEE International Conference on Data Mining Workshops. IEEE, pp 971–974
Lakkaraju H, Bach SH, Leskovec J (2016) Interpretable decision sets: a joint framework for description and prediction. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp 1675–1684
Lapuschkin S, Wäldchen S, Binder A, Montavon G, Samek W, Müller K-R (2019) Unmasking Clever Hans predictors and assessing what machines really learn. Nat Commun 10(1):1–8
Lewis M, Li H, Sycara K (2021) Deep learning, transparency, and trust in human robot teamwork. In: Trust in human-robot interaction. Elsevier, pp 321–352
Lockton D, Brawley L, Aguirre Ulloa M, Prindible M, Forlano L, Rygh K, Fass J, Herzog K, Nissen B (2019) Tangible thinking: materializing how we imagine and understand systems, experiences, and relationships
Lundberg SM, Lee S-I (2017) A unified approach to interpreting model predictions. Adv Neural Inf Process Syst 30
Marcus G, Davis E (2019) Rebooting AI: building artificial intelligence we can trust. Vintage
Matthews G, Panganiban AR, Lin J, Long M, Schwing M (2021) Super-machines or sub-humans: mental models and trust in intelligent autonomous systems. In: Trust in human-robot interaction. Elsevier, pp 59–82
McMeekin N, Wu O, Germeni E, Briggs A (2020) How methodological frameworks are being developed: evidence from a scoping review. BMC Med Res Methodol 20(1):1–9
Milton NR (2007) Knowledge acquisition in practice: a step-by-step guide. Springer Science & Business Media
Mitchell M, Wu S, Zaldivar A, Barnes P, Vasserman L, Hutchinson B, Spitzer E, Raji ID, Gebru T (2019) Model cards for model reporting. In: Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19). Association for Computing Machinery, New York, NY, USA, pp 220–229. https://doi.org/10.1145/3287560.3287596
Mohseni S, Zarei N, Ragan ED (2021) A multidisciplinary survey and framework for design and evaluation of explainable AI systems. ACM Trans Interact Intell Syst (TiiS) 11(3–4):1–45
Moore J (1988) Explanation in expert systems: a survey
Morales-Forero A, Bassetto S (2019) Case study: a semi-supervised methodology for anomaly detection and diagnosis. In: 2019 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM). IEEE, pp 1031–1037
Morales-Forero A, Bassetto S, Coatanea E (in press) Toward safe AI. AI & Society
Mor-Yosef S, Samueloff A, Modan B, Navot D, Schenker JG (1990) Ranking the risk factors for cesarean: logistic regression analysis of a nationwide study. Obstet Gynecol 75(6):944–947
Mueller ST, Hoffman RR, Clancey W, Emrey A, Klein G (2019) Explanation in human-AI systems: a literature meta-review, synopsis of key ideas and publications, and bibliography for explainable AI. arXiv:1902.01876 [cs]. http://arxiv.org/abs/1902.01876
Muller M, Wolf CT, Andres J, Desmond M, Joshi NN, Ashktorab Z, Sharma A et al (2021) Designing ground truth and the social life of labels. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, Yokohama, Japan, pp 1–16. https://doi.org/10.1145/3411764.3445402
Naiseh M, Jiang N, Ma J, Ali R (2020) Personalising explainable recommendations: literature and conceptualisation. In: World Conference on Information Systems and Technologies. Springer, pp 518–533
Nickerson RC, Varshney U, Muntermann J (2013) A method for taxonomy development and its application in information systems. Eur J Inf Syst 22(3):336–359
Nunes I, Jannach D (2017) A systematic review and taxonomy of explanations in decision support and recommender systems. User Model User-Adap Inter 27(3):393–444
Pekka AP, Bauer W, Bergmann U, Bieliková M, Bonefeld-Dahl C, Bonnet Y, Bouarfa L (2018) The European Commission’s high-level expert group on artificial intelligence: ethics guidelines for trustworthy AI. Working document for stakeholders’ consultation. Brussels, pp 1–37
Poursabzi-Sangdeh F, Goldstein DG, Hofman JM, Wortman Vaughan JW, Wallach H (2021) Manipulating and measuring model interpretability. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21). Association for Computing Machinery, New York, NY, USA, pp 1–52. https://doi.org/10.1145/3411764.3445315
Preece A, Harborne D, Braines D, Tomsett R, Chakraborty S (2018) Stakeholders in explainable AI. arXiv preprint arXiv:1810.00184
Rathi S (2019) Generating counterfactual and contrastive explanations using SHAP. arXiv preprint arXiv:1906.09293
Ribera M, Lapedriza A (2019) Can we do better explanations? A proposal of user-centered explainable AI. Los Angeles, p 7
Roth EM, Bisantz AM, Wang X, Kim T, Hettinger AZ (2021) A work-centered approach to system user-evaluation. J Cogn Eng Decis Mak 15(4):155–174
Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1(5):206–215
Salembier P, Wagner I (2021) Studies of work ‘in the wild’. Comput Support Coop Work (CSCW) 30(2):169–188
Sanneman L, Shah JA (2020) A situation awareness-based framework for design and evaluation of explainable AI. In: Calvaresi D, Najjar A, Winikoff M, Främling K (eds) Explainable, transparent autonomous agents and multi-agent systems. Springer International Publishing, Cham, pp 94–110
Selbst AD, Boyd D, Friedler SA, Venkatasubramanian S, Vertesi J (2019) Fairness and abstraction in sociotechnical systems. In: Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19). Association for Computing Machinery, New York, NY, USA, pp 59–68. https://doi.org/10.1145/3287560.3287598
Shahri A, Hosseini M, Phalp K, Taylor J, Ali R (2014) Towards a code of ethics for gamification at enterprise. In: IFIP Working Conference on the Practice of Enterprise Modeling. Springer, pp 235–245
Shepherd A (2015) Task analysis. In: Evaluation of human work, 4th edn. CRC Press
Shmelova T, Sterenharz A, Dolgikh S (2020) Artificial intelligence in aviation industries: methodologies, education, applications, and opportunities. In: Handbook of research on artificial intelligence applications in the aviation and aerospace industries. IGI Global, pp 1–35
Song K, Yan Y (2013) A noise robust method based on completed local binary patterns for hot-rolled steel strip surface defects. Appl Surf Sci 285:858–864
St-Vincent M, Vézina N, Bellemare M, Denis D, Ledoux É, Imbeau D (2014) Ergonomic intervention. Institut de recherche Robert-Sauvé en santé et en sécurité du travail
Tomsett R, Widdicombe A, Xing T, Chakraborty S, Julier S, Gurram P, Rao R, Srivastava M (2018) Why the failure? How adversarial examples can provide insights for interpretable machine learning. In: 2018 21st International Conference on Information Fusion (FUSION). IEEE, pp 838–845
Tramer F, Boneh D (2019) Adversarial training and robustness for multiple perturbations. Adv Neural Inf Process Syst 32
Vicente KJ (1999) Cognitive work analysis: toward safe, productive, and healthy computer-based work. CRC Press
Xiao C, Li B, Zhu J-Y, He W, Liu M, Song D (2018) Generating adversarial examples with adversarial networks. arXiv preprint arXiv:1801.02610
Yeung K (2020) Recommendation of the council on artificial intelligence (OECD). Int Leg Mater 59(1):27–34
Zsambok CE, Klein G (2014) Naturalistic decision making. Psychology Press
Metadata
Title
An explanation space to align user studies with the technical development of Explainable AI
Authors
Garrick Cabour
Andrés Morales-Forero
Élise Ledoux
Samuel Bassetto
Publication date
26.07.2022
Publisher
Springer London
Published in
AI & SOCIETY / Issue 2/2023
Print ISSN: 0951-5666
Electronic ISSN: 1435-5655
DOI
https://doi.org/10.1007/s00146-022-01536-6
