Published in: Minds and Machines 3/2019

29.05.2019 | Original Article

The Pragmatic Turn in Explainable Artificial Intelligence (XAI)

Author: Andrés Páez

Abstract

In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will lack a well-defined goal. Aside from providing a clearer objective for XAI, focusing on understanding also allows us to relax the factivity condition on explanation, which is impossible to fulfill in many machine learning models, and to focus instead on the pragmatic conditions that determine the best fit between a model and the methods and devices deployed to understand it. After an examination of the different types of understanding discussed in the philosophical and psychological literature, I conclude that interpretative or approximation models not only provide the best way to achieve the objectual understanding of a machine learning model, but are also a necessary condition to achieve post hoc interpretability. This conclusion is partly based on the shortcomings of the purely functionalist approach to post hoc interpretability that seems to be predominant in most recent literature.
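The "interpretative or approximation models" mentioned in the abstract can be made concrete with a small sketch. The following Python snippet is illustrative only and is not drawn from the paper: it fits a shallow decision tree as a global surrogate to a hypothetical black-box classifier and reports the surrogate's fidelity to the black box. The dataset, model choices, and fidelity measure are all assumptions made for the example.

```python
# A minimal sketch of a post hoc approximation model: an interpretable
# surrogate (a shallow decision tree) is trained to mimic a black-box
# classifier's decisions. Data and models are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic data standing in for the stakeholder's domain.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# The "black box" whose decisions the stakeholder wants to understand.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train the surrogate on the black box's outputs, not on the true labels:
# the surrogate approximates the model, not the world.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# "Fidelity": how closely the interpretable model reproduces the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity to the black box: {fidelity:.2f}")
```

The design choice worth noting is that the surrogate is judged by its fidelity to the black box rather than by its accuracy on the original task: the interpretable model is a device for understanding the opaque model, not a replacement for it.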


Footnotes
1
For a survey of recent characterizations of the goals of XAI, see Lipton (2016), Doshi-Velez and Kim (2017), Samek et al. (2017) and Gilpin et al. (2019).
 
2
I will use decision as the general term to encompass outputs from AI models, such as predictions, categorizations, action selection, etc.
 
3
Needless to say, each of these differences has been the subject of great philosophical controversy. I am simply reporting some of the reasons that have been stated in the literature to motivate the analysis of understanding as an independent concept.
 
4
Here I will not evaluate the merits of this thesis in the philosophy of science. For a discussion, see the collection edited by De Regt et al. (2009).
 
5
More precisely, the explanans-statement and the explanandum-statement must be true. If one holds, following Lewis (1986) and Woodward (2003), that the relata of the explanation relation are particulars, i.e., things or events, the claim amounts to saying that the things or events occurring in both the explanans and the explanandum position exist or occur.
 
6
This list is not meant to be exhaustive and it excludes pragmatic theories of explanation such as the ones defended by Achinstein (1983) and van Fraassen (1980). I have argued elsewhere (Páez 2006) that these theories offer an account of explanation that lacks any sort of objectivity.
 
7
Salmon’s (1971) reference class rule, for example, requires the probabilistic (causal) context of a single event to be complete to avoid any epistemic relativity.
 
8
I am grateful to an anonymous reviewer for pointing this out.
 
9
The relaxation of the factivity condition is often defended in the context of objectual understanding, but it remains controversial in the case of understanding why. I return to this distinction in Sect. 4.
 
10
De Graaf and Malle (2017) have also emphasized the importance of these pragmatic factors: “The success of an explanation therefore depends on several critical audience factors—assumptions, knowledge, and interests that an audience has when decoding the explanation” (p. 19).
 
11
See also De Graaf and Malle (2017), Miller (2019) and Mittelstadt et al. (2019).
 
12
See Falcone and Castelfranchi (2001) for a critique of the use of decision theory to understand trust in virtual environments.
 
13
Philosophers of science are much more inclined to accept this view than epistemologists, who have fiercely resisted it. See, for example, Zagzebski (2001), Kvanvig (2003), Elgin (2004) and Pritchard (2014). I do not have space to discuss the issue here, but from the text it should be clear that I side with the epistemologists.
 
14
A direct understanding of a phenomenon would be factive, based on a literal description of the explanatory elements involved. It is in this sense that models offer an indirect path towards objective understanding.
 
15
In Allahyari and Lavesson (2011), 100 non-expert users were asked to compare the understandability of decision trees and rule lists. The former method was deemed more understandable. Freitas (2014) examines the pros and cons of decision trees, classification rules, decision tables, nearest neighbors, and Bayesian network classifiers with respect to their interpretability, and discusses how to improve the comprehensibility of classification models in general. More recently, Fürnkranz et al. (2018) performed an experiment with 390 participants to question the idea that the likelihood that a user will accept a logical model, such as a rule set, as an explanation for a decision is determined by the simplicity of the model. Lage et al. (2019) also explore the complexities of rule sets to find features that make them more interpretable, while Piltaver et al. (2016) undertake a similar analysis in the case of classification trees. Another important aspect of this empirical line of research is the study of cognitive biases in the understanding of interpretable models. Kliegr et al. (2018) study the possible effects of such biases on symbolic machine learning models.
 
16
As noted in the Introduction, none of these methods is intrinsically interpretable.
 
17
A terminological clarification is in order. Mittelstadt et al. (2019) and other researchers in XAI use the phrase “contrastive explanations” to refer to counterfactuals. But these are two very different things. In philosophy, an explanation is contrastive if it answers the question “Why p rather than q?” instead of just “Why p?” In either case the explanation provided must be factual. To turn it into a counterfactual situation, the question must be changed to: “What changes in the world would have brought about q instead of p?” And the answer will be a hypothetical or counterfactual statement, not an explanation.
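To make the distinction drawn in this footnote concrete, the following sketch (not from the paper; the classifier, data, and brute-force search are illustrative assumptions) answers the counterfactual question "What change to the input would have brought about q instead of p?" rather than explaining why p actually occurred.

```python
# Toy illustration of a counterfactual, as opposed to a (factual) explanation:
# find the smallest single-feature change that would have flipped the decision.
# The data, classifier, and search procedure are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=1)
clf = LogisticRegression().fit(X, y)

x = X[0].copy()
p = clf.predict(x.reshape(1, -1))[0]   # the actual decision p

# Brute-force search over single-feature perturbations that yield q instead of p.
best = None
for i in range(x.shape[0]):
    for delta in np.linspace(-3, 3, 121):
        x_cf = x.copy()
        x_cf[i] += delta
        if clf.predict(x_cf.reshape(1, -1))[0] != p:
            if best is None or abs(delta) < abs(best[1]):
                best = (i, delta)

if best is not None:
    i, delta = best
    print(f"Counterfactual: changing feature {i} by {delta:+.2f} "
          f"would have produced the alternative outcome instead of {p}.")
```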
 
18
To be sure, there are many scenarios where both the owner and the user (but not the developer) of the model will be satisfied with its accurate decisions without feeling the need to have an objectual understanding of it. Think of the books recommended by Amazon or the movies suggested by Netflix using the simple rule: "If you liked x, you might like y." As I argued in Sect. 2, the relation between understanding and trust is always mediated by the interests, goals, resources, and degree of risk aversion of stakeholders. In these cases, the cost–benefit relation makes it unnecessary to make the additional effort of looking for mechanisms.
 
References
Achinstein, P. (1983). The nature of explanation. New York: Oxford University Press.
Allahyari, H., & Lavesson, N. (2011). User-oriented assessment of classification model understandability. In Proceedings of the 11th Scandinavian conference on artificial intelligence. Amsterdam: IOS Press.
Carter, J. A., & Gordon, E. C. (2016). Objectual understanding, factivity and belief. In M. Grajner & P. Schmechtig (Eds.), Epistemic reasons, norms and goals (pp. 423–442). Berlin: De Gruyter.
Caruana, R., Kangarloo, H., Dionisio, J. D. N., Sinha, U., & Johnson, D. (1999). Case-based explanations of non-case-based learning methods. In Proceedings of the AMIA symposium (p. 212). American Medical Informatics Association.
Darwin, C. (1860/1903). Letter to Henslow, May 1860. In F. Darwin (Ed.), More letters of Charles Darwin (Vol. I). New York: D. Appleton.
De Graaf, M. M., & Malle, B. F. (2017). How people explain action (and autonomous intelligent systems should too). In AAAI fall symposium on artificial intelligence for human–robot interaction (pp. 19–26). Palo Alto: The AAAI Press.
de Regt, H. W., & Dieks, D. (2005). A contextual approach to scientific understanding. Synthese, 144, 137–170.
de Regt, H. W., Leonelli, S., & Eigner, K. (Eds.). (2009). Scientific understanding: Philosophical perspectives. Pittsburgh: University of Pittsburgh Press.
Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.
Ehsan, U., Harrison, B., Chan, L., & Riedl, M. O. (2018). Rationalization: A neural machine translation approach to generating natural language explanations. In Proceedings of the 2018 AAAI/ACM conference on AI, ethics, and society (pp. 81–87). New York: ACM.
Elgin, C. Z. (2004). True enough. Philosophical Issues, 14, 113–131.
Elgin, C. Z. (2007). Understanding and the facts. Philosophical Studies, 132, 33–42.
Elgin, C. Z. (2008). Exemplification, idealization, and scientific understanding. In M. Suárez (Ed.), Fictions in science: Philosophical essays on modelling and idealization (pp. 77–90). London: Routledge.
Falcone, R., & Castelfranchi, C. (2001). Social trust: A cognitive approach. In C. Castelfranchi & Y.-H. Tan (Eds.), Trust and deception in virtual societies (pp. 55–90). Dordrecht: Springer.
Freitas, A. A. (2014). Comprehensible classification models: A position paper. ACM SIGKDD Explorations Newsletter, 15(1), 1–10.
Fürnkranz, J., Kliegr, T., & Paulheim, H. (2018). On cognitive preferences and the plausibility of rule-based models. arXiv preprint arXiv:1803.01316.
Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2019). Explaining explanations: An overview of interpretability of machine learning. arXiv preprint arXiv:1806.00069v3.
Greco, J. (2010). Achieving knowledge. Cambridge: Cambridge University Press.
Greco, J. (2012). Intellectual virtues and their place in philosophy. In C. Jäger & W. Löffler (Eds.), Epistemology: Contexts, values, disagreement: Proceedings of the 34th international Wittgenstein symposium (pp. 117–130). Heusenstamm: Ontos.
Grimm, S. R. (2006). Is understanding a species of knowledge? British Journal for the Philosophy of Science, 57, 515–535.
Grimm, S. R. (2011). Understanding. In S. Bernecker & D. Pritchard (Eds.), The Routledge companion to epistemology (pp. 84–94). New York: Routledge.
Grimm, S. R. (2014). Understanding as knowledge of causes. In A. Fairweather (Ed.), Virtue epistemology naturalized: Bridges between virtue epistemology and philosophy of science. Dordrecht: Springer.
Grimm, S. R. (Ed.). (2018). Making sense of the world: New essays on the philosophy of understanding. New York: Oxford University Press.
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), Article 93.
Hempel, C. G. (1965). Aspects of scientific explanation. New York: The Free Press.
Huysmans, J., Dejaeger, K., Mues, C., Vanthienen, J., & Baesens, B. (2011). An empirical evaluation of the comprehensibility of decision table, tree and rule based predictive models. Decision Support Systems, 51(1), 141–154.
Kelemen, D. (1999). Functions, goals, and intentions: Children's teleological reasoning about objects. Trends in Cognitive Science, 12, 461–468.
Khalifa, K. (2012). Inaugurating understanding or repackaging explanation. Philosophy of Science, 79, 15–37.
Kim, B. (2015). Interactive and interpretable machine learning models for human machine collaboration. Ph.D. thesis, Massachusetts Institute of Technology.
Kliegr, T., Bahník, Š., & Fürnkranz, J. (2018). A review of possible effects of cognitive biases on interpretation of rule-based machine learning models. arXiv preprint arXiv:1804.02969.
Krening, S., Harrison, B., Feigh, K., Isbell, C., Riedl, M., & Thomaz, A. (2016). Learning from explanations using sentiment and advice in RL. IEEE Transactions on Cognitive and Developmental Systems, 9(1), 44–55.
Kvanvig, J. (2003). The value of knowledge and the pursuit of understanding. New York: Cambridge University Press.
Kvanvig, J. (2009). Response to critics. In A. Haddock, A. Millar, & D. Pritchard (Eds.), Epistemic value (pp. 339–351). New York: Oxford University Press.
Lage, I., Chen, E., He, J., Narayanan, M., Kim, B., Gershman, S., et al. (2019). An evaluation of the human-interpretability of explanation. arXiv preprint arXiv:1902.00006.
Lapuschkin, S., Wäldchen, S., Binder, A., Montavon, G., Samek, W., & Müller, K. R. (2019). Unmasking Clever Hans predictors and assessing what machines really learn. Nature Communications, 10(1), 1096.
Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2017). Fair, transparent, and accountable algorithmic decision-making processes: The premise, the proposed solutions, and the open challenges. Philosophy & Technology, 31, 611–627.
Lewis, D. K. (1986). Causal explanation. In D. K. Lewis (Ed.), Philosophical papers (Vol. II, pp. 214–240). New York: Oxford University Press.
Lipton, P. (2009). Understanding without explanation. In H. W. de Regt, S. Leonelli, & K. Eigner (Eds.), Scientific understanding: Philosophical perspectives (pp. 43–63). Pittsburgh: University of Pittsburgh Press.
Lombrozo, T., & Gwynne, N. Z. (2014). Explanation and inference: Mechanistic and functional explanations guide property generalization. Frontiers in Human Neuroscience, 8, 700.
Lombrozo, T., & Wilkenfeld, D. A. (forthcoming). Mechanistic vs. functional understanding. In S. R. Grimm (Ed.), Varieties of understanding: New perspectives from philosophy, psychology, and theology. New York: Oxford University Press.
McAuley, J., & Leskovec, J. (2013). Hidden factors and hidden topics: Understanding rating dimensions with review text. In Proceedings of the 7th ACM conference on recommender systems (pp. 165–172). New York: ACM.
Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–18.
Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. In Proceedings of the conference on fairness, accountability, and transparency (pp. 279–288). New York: ACM.
Mizrahi, M. (2012). Idealizations and scientific understanding. Philosophical Studies, 160, 237–252.
Páez, A. (2006). Explanations in K: An analysis of explanation as a belief revision operation. Oberhausen: Athena Verlag.
Páez, A. (2009). Artificial explanations: The epistemological interpretation of explanation in AI. Synthese, 170, 131–146.
Pazzani, M. (2000). Knowledge discovery from data? IEEE Intelligent Systems, 15(2), 10–13.
Piltaver, R., Luštrek, M., Gams, M., & Martinčić-Ipšić, S. (2016). What makes classification trees comprehensible? Expert Systems with Applications: An International Journal, 62(C), 333–346.
Potochnik, A. (2017). Idealization and the aims of science. Chicago: University of Chicago Press.
Pritchard, D. (2008). Knowing the answer, understanding and epistemic value. Grazer Philosophische Studien, 77, 325–339.
Pritchard, D. (2014). Knowledge and understanding. In A. Fairweather (Ed.), Virtue scientia: Bridges between virtue epistemology and philosophy of science (pp. 315–328). Dordrecht: Springer.
Quinonero-Candela, J., Sugiyama, M., Schwaighofer, A., & Lawrence, N. D. (Eds.). (2009). Dataset shift in machine learning. Cambridge: MIT Press.
Reiss, J. (2012). The explanation paradox. Journal of Economic Methodology, 19, 43–62.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 1135–1144). New York: ACM.
Salmon, W. C. (1971). Statistical explanation. In W. C. Salmon (Ed.), Statistical explanation and statistical relevance. Pittsburgh: University of Pittsburgh Press.
Salmon, W. C. (1984). Scientific explanation and the causal structure of the world. Princeton: Princeton University Press.
Samek, W., Wiegand, T., & Müller, K. R. (2017). Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv preprint arXiv:1708.08296.
Strevens, M. (2013). No understanding without explanation. Studies in History and Philosophy of Science, 44, 510–515.
van Fraassen, B. (1980). The scientific image. Oxford: Clarendon Press.
Wilkenfeld, D. (2013). Understanding as representation manipulability. Synthese, 190, 997–1016.
Woodward, J. (2003). Making things happen: A theory of causal explanation. New York: Oxford University Press.
Zagzebski, L. (2001). Recovering understanding. In M. Steup (Ed.), Knowledge, truth, and duty: Essays on epistemic justification, responsibility, and virtue. New York: Oxford University Press.
Zagzebski, L. (2009). On epistemology. Belmont: Wadsworth.
Zeiler, M. D., & Fergus, R. (2014). Visualizing and understanding convolutional networks. In 13th European conference on computer vision, ECCV 2014 (pp. 818–833). Cham: Springer.
Metadata
Title: The Pragmatic Turn in Explainable Artificial Intelligence (XAI)
Author: Andrés Páez
Publication date: 29.05.2019
Publisher: Springer Netherlands
Published in: Minds and Machines / Issue 3/2019
Print ISSN: 0924-6495
Electronic ISSN: 1572-8641
DOI: https://doi.org/10.1007/s11023-019-09502-w
