Editorial Notes
A corrigendum was issued for this paper on September 16, 2019. You can download the corrigendum from the source materials section of this citation page.
ABSTRACT
From healthcare to criminal justice, artificial intelligence (AI) increasingly supports high-consequence human decisions, spurring the field of explainable AI (XAI). This paper seeks to strengthen empirical, application-specific investigations of XAI by exploring the theoretical underpinnings of human decision making, drawing from the fields of philosophy and psychology. Based on an extensive review across these fields, we propose a conceptual framework for building human-centered, decision-theory-driven XAI. Drawing on this framework, we identify pathways along which human cognitive patterns drive needs for XAI and ways in which XAI can mitigate common cognitive biases. We then put the framework into practice by designing and implementing an explainable clinical diagnostic tool for intensive care phenotyping and conducting a co-design exercise with clinicians. Thereafter, we draw insights into how this framework bridges algorithm-generated explanations and human decision-making theories. Finally, we discuss implications for XAI design and development.
Supplemental Material
Available for Download
Corrigendum to "Designing Theory-Driven User-Centric Explainable AI," by Wang et al., Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems.
Preview video captions