
2024 | Original Paper | Book Chapter

Explainable AI Assisted Decision-Making and Human Behaviour

Author: Muhammad Suffian

Published in: Computing, Internet of Things and Data Analytics

Publisher: Springer Nature Switzerland


Abstract

Explainable artificial intelligence (XAI) helps users understand the logic behind machine learning (ML) model predictions so that they can better interpret and trust them. Many studies have examined the interaction between humans and XAI, focusing mainly on metrics such as interpretability, fidelity, transparency, trust, and usability of explanations. This paper presents a user study exploring how different types of XAI explanations affect people's understanding and behaviour in decision-making. In behavioural science, nudges and boosts are competing approaches that provide a choice architecture for improving decision-making. In our study, we used two types of XAI explanations as a choice architecture and observed their effects on behaviour, which led to alternative decision-making outcomes. Explanations containing actionable information proved more effective and understandable. However, our findings also indicate that the information provided by certain XAI techniques may not be sufficient to persuade users to understand and trust the explanations offered.
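To illustrate what "actionable information" means in this context: a counterfactual explanation tells the user what minimal change would flip the model's decision. The sketch below is not from the paper; the loan-approval rule and the one-feature search are hypothetical stand-ins chosen only to show the idea of turning a rejection into advice.

```python
# Illustrative sketch (not the paper's method): a counterfactual
# explanation turns a rejection into actionable advice by finding the
# smallest feature change that flips a toy model's decision.

def model(income, debt):
    """Hypothetical loan-approval rule standing in for a trained ML model."""
    return "approved" if income - 2 * debt >= 50 else "rejected"

def counterfactual(income, debt, step=1):
    """Increase income until the decision flips; return the change needed."""
    delta = 0
    while model(income + delta, debt) == "rejected":
        delta += step
    return delta

applicant = {"income": 40, "debt": 5}
print(model(**applicant))                      # rejected
needed = counterfactual(**applicant)
print(f"Raise income by {needed} to be approved.")
```

Unlike a pure feature-attribution explanation ("income mattered most"), this kind of output gives the user a concrete action, which is the property the study associates with better understanding.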


Metadata
Title
Explainable AI Assisted Decision-Making and Human Behaviour
Author
Muhammad Suffian
Copyright Year
2024
DOI
https://doi.org/10.1007/978-3-031-53717-2_36
