
13 May 2019

Explainability in human–agent systems

Authors: Avi Rosenfeld, Ariella Richardson

Published in: Autonomous Agents and Multi-Agent Systems | Issue 6/2019


Abstract

This paper presents a taxonomy of explainability in human–agent systems. We consider fundamental questions about the Why, Who, What, When, and How of explainability. First, we define explainability and its relationship to the related terms of interpretability, transparency, explicitness, and faithfulness. These definitions allow us to answer why explainability is needed in a system, whom it is geared toward, and what explanations can be generated to meet this need. We then consider when the user should be presented with this information. Last, we consider how objective and subjective measures can be used to evaluate the entire system. This last question is the most encompassing, as answering it requires evaluating all of the other issues regarding explainability.

Metadata
Title
Explainability in human–agent systems
Authors
Avi Rosenfeld
Ariella Richardson
Publication date
13 May 2019
Publisher
Springer US
Published in
Autonomous Agents and Multi-Agent Systems / Issue 6/2019
Print ISSN: 1387-2532
Electronic ISSN: 1573-7454
DOI
https://doi.org/10.1007/s10458-019-09408-y