Research Article · CHI Conference Proceedings
DOI: 10.1145/3290605.3300831

Designing Theory-Driven User-Centric Explainable AI

Published: 02 May 2019

Editorial Notes

A corrigendum was issued for this paper on September 16, 2019; it is available in the source materials accompanying this article.

ABSTRACT

From healthcare to criminal justice, artificial intelligence (AI) is increasingly supporting high-consequence human decisions. This has spurred the field of explainable AI (XAI). This paper seeks to strengthen empirical, application-specific investigations of XAI by exploring the theoretical underpinnings of human decision making, drawing from the fields of philosophy and psychology. Based on an extensive review across these fields, we propose a conceptual framework for building human-centered, decision-theory-driven XAI. Drawing on this framework, we identify pathways along which human cognitive patterns drive needs for building XAI and along which XAI can mitigate common cognitive biases. We then put the framework into practice by designing and implementing an explainable clinical diagnostic tool for intensive care phenotyping and by conducting a co-design exercise with clinicians. Thereafter, we draw insights into how the framework bridges algorithm-generated explanations and human decision-making theories. Finally, we discuss implications for XAI design and development.

Supplemental Material

• paper601p.mp4 (mp4, 1 MB)
• paper601.mp4 (mp4, 235.1 MB)


Published in

CHI '19: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
May 2019, 9077 pages
ISBN: 9781450359702
DOI: 10.1145/3290605

Copyright © 2019 ACM. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

CHI '19 paper acceptance rate: 703 of 2,958 submissions (24%). Overall acceptance rate: 6,199 of 26,314 submissions (24%).
