Published in: Minds and Machines 3/2019

29-05-2019 | Original Article

The Pragmatic Turn in Explainable Artificial Intelligence (XAI)

Author: Andrés Páez


Abstract

In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will lack a well-defined goal. Aside from providing a clearer objective for XAI, focusing on understanding also allows us to relax the factivity condition on explanation, which is impossible to fulfill in many machine learning models, and to focus instead on the pragmatic conditions that determine the best fit between a model and the methods and devices deployed to understand it. After an examination of the different types of understanding discussed in the philosophical and psychological literature, I conclude that interpretative or approximation models not only provide the best way to achieve the objectual understanding of a machine learning model, but are also a necessary condition to achieve post hoc interpretability. This conclusion is partly based on the shortcomings of the purely functionalist approach to post hoc interpretability that seems to be predominant in most recent literature.
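The abstract's central device, an interpretative or approximation model, can be made concrete with a toy sketch. The following is purely illustrative and not from the paper: the opaque function `black_box`, the stump-fitting routine, and all names are hypothetical. A simple one-threshold rule (a "stump") is fitted to mimic the outputs of an assumed black-box classifier, and its agreement with the black box ("fidelity") measures how well the surrogate fits the model it is meant to make understandable.

```python
import random

# Hypothetical black-box classifier: its internals are assumed opaque
# to the stakeholder who wants to understand its decisions.
def black_box(x):
    return 1 if (0.3 * x[0] + x[1] ** 2) > 0.5 else 0

random.seed(0)
X = [(random.random(), random.random()) for _ in range(1000)]
y = [black_box(x) for x in X]

# Surrogate: try a single-threshold rule on each feature and keep the
# (feature, threshold) pair that best reproduces the black box's outputs.
def fit_stump(X, y):
    best = None
    for feat in (0, 1):
        for t in [i / 100 for i in range(1, 100)]:
            preds = [1 if x[feat] > t else 0 for x in X]
            fidelity = sum(p == yi for p, yi in zip(preds, y)) / len(y)
            if best is None or fidelity > best[2]:
                best = (feat, t, fidelity)
    return best

feat, t, fidelity = fit_stump(X, y)
print(f"surrogate: predict 1 iff feature {feat} > {t:.2f}; fidelity = {fidelity:.2f}")
```

The surrogate is humanly surveyable (a single rule) but only approximately true of the black box; its fidelity score is one way to cash out the "best fit between a model and the methods and devices deployed to understand it" that the abstract appeals to.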


Footnotes
1
For a survey of recent characterizations of the goals of XAI, see Lipton (2016), Doshi-Velez and Kim (2017), Samek et al. (2017) and Gilpin et al. (2019).
 
2
I will use "decision" as the general term to encompass the outputs of AI models, such as predictions, categorizations, action selections, etc.
 
3
Needless to say, each of these differences has been the subject of great philosophical controversy. I am simply reporting some of the reasons that have been stated in the literature to motivate the analysis of understanding as an independent concept.
 
4
Here I will not evaluate the merits of this thesis in the philosophy of science. For a discussion, see the collection edited by De Regt et al. (2009).
 
5
More precisely, the explanans-statement and the explanandum-statement must be true. If one holds, following Lewis (1986) and Woodward (2003), that the relata of the explanation relation are particulars, i.e., things or events, the claim amounts to saying that the things or events occurring in both the explanans and the explanandum position exist or occur.
 
6
This list is not meant to be exhaustive and it excludes pragmatic theories of explanation such as the ones defended by Achinstein (1983) and van Fraassen (1980). I have argued elsewhere (Páez 2006) that these theories offer an account of explanation that lacks any sort of objectivity.
 
7
Salmon’s (1971) reference class rule, for example, requires the probabilistic (causal) context of a single event to be complete to avoid any epistemic relativity.
 
8
I am grateful to an anonymous reviewer for pointing this out.
 
9
The relaxation of the factivity condition is often defended in the context of objectual understanding, but it remains controversial in the case of understanding why. I return to this distinction in Sect. 4.
 
10
De Graaf and Malle (2017) have also emphasized the importance of these pragmatic factors: “The success of an explanation therefore depends on several critical audience factors—assumptions, knowledge, and interests that an audience has when decoding the explanation” (p. 19).
 
11
See also De Graaf and Malle (2017), Miller (2019) and Mittelstadt et al. (2019).
 
12
See Falcone and Castelfranchi (2001) for a critique of the use of decision theory to understand trust in virtual environments.
 
13
Philosophers of science are much more inclined to accept this view than epistemologists, who have fiercely resisted it. See, for example, Zagzebski (2001), Kvanvig (2003), Elgin (2004) and Pritchard (2014). I do not have space to discuss the issue here, but from the text it should be clear that I side with the epistemologists.
 
14
A direct understanding of a phenomenon would be factive, based on a literal description of the explanatory elements involved. It is in this sense that models offer an indirect path towards objective understanding.
 
15
In Allahyari and Lavesson (2011), 100 non-expert users were asked to compare the understandability of decision trees and rule lists; the former were deemed more understandable. Freitas (2014) examines the pros and cons of decision trees, classification rules, decision tables, nearest neighbors, and Bayesian network classifiers with respect to their interpretability, and discusses how to improve the comprehensibility of classification models in general. More recently, Fürnkranz et al. (2018) ran an experiment with 390 participants to question the idea that the likelihood that a user will accept a logical model, such as a rule set, as an explanation for a decision is determined by the simplicity of the model. Lage et al. (2019) also explore the complexities of rule sets to find features that make them more interpretable, while Piltaver et al. (2016) undertake a similar analysis for classification trees. Another important aspect of this empirical line of research is the study of cognitive biases in the understanding of interpretable models; Kliegr et al. (2018) study the possible effects of such biases on symbolic machine learning models.
 
16
As noted in the Introduction, none of these methods is intrinsically interpretable.
 
17
A terminological clarification is in order. Mittelstadt et al. (2019) and other researchers in XAI use the phrase “contrastive explanations” to refer to counterfactuals. But these are two very different things. In philosophy, an explanation is contrastive if it answers the question “Why p rather than q?” instead of just “Why p?” In either case the explanation provided must be factual. To turn it into a counterfactual situation, the question must be changed to: “What changes in the world would have brought about q instead of p?” And the answer will be a hypothetical or counterfactual statement, not an explanation.
 
18
To be sure, there are many scenarios where both the owner and the user (but not the developer) of the model will be satisfied with its accurate decisions without feeling the need to have an objectual understanding of it. Think of the books recommended by Amazon or the movies suggested by Netflix using the simple rule: "If you liked x, you might like y." As I argued in Sect. 2, the relation between understanding and trust is always mediated by the interests, goals, resources, and degree of risk aversion of stakeholders. In these cases, the cost–benefit relation makes it unnecessary to make the additional effort of looking for mechanisms.
 
Literature
Achinstein, P. (1983). The nature of explanation. New York: Oxford University Press.
Allahyari, H., & Lavesson, N. (2011). User-oriented assessment of classification model understandability. In Proceedings of the 11th Scandinavian conference on artificial intelligence. Amsterdam: IOS Press.
Carter, J. A., & Gordon, E. C. (2016). Objectual understanding, factivity and belief. In M. Grajner & P. Schmechtig (Eds.), Epistemic reasons, norms and goals (pp. 423–442). Berlin: De Gruyter.
Caruana, R., Kangarloo, H., Dionisio, J. D. N., Sinha, U., & Johnson, D. (1999). Case-based explanations of non-case-based learning methods. In Proceedings of the AMIA symposium (p. 212). American Medical Informatics Association.
Darwin, C. (1860/1903). Letter to Henslow, May 1860. In F. Darwin (Ed.), More letters of Charles Darwin (Vol. I). New York: D. Appleton.
De Graaf, M. M., & Malle, B. F. (2017). How people explain action (and autonomous intelligent systems should too). In AAAI fall symposium on artificial intelligence for human–robot interaction (pp. 19–26). Palo Alto: The AAAI Press.
de Regt, H. W., & Dieks, D. (2005). A contextual approach to scientific understanding. Synthese, 144, 137–170.
de Regt, H. W., Leonelli, S., & Eigner, K. (Eds.). (2009). Scientific understanding: Philosophical perspectives. Pittsburgh: University of Pittsburgh Press.
Ehsan, U., Harrison, B., Chan, L., & Riedl, M. O. (2018). Rationalization: A neural machine translation approach to generating natural language explanations. In Proceedings of the 2018 AAAI/ACM conference on AI, ethics, and society (pp. 81–87). New York: ACM.
Elgin, C. Z. (2007). Understanding and the facts. Philosophical Studies, 132, 33–42.
Elgin, C. Z. (2008). Exemplification, idealization, and scientific understanding. In M. Suárez (Ed.), Fictions in science: Philosophical essays on modelling and idealization (pp. 77–90). London: Routledge.
Falcone, R., & Castelfranchi, C. (2001). Social trust: A cognitive approach. In C. Castelfranchi & Y.-H. Tan (Eds.), Trust and deception in virtual societies (pp. 55–90). Dordrecht: Springer.
Freitas, A. A. (2014). Comprehensible classification models: A position paper. ACM SIGKDD Explorations Newsletter, 15(1), 1–10.
Fürnkranz, J., Kliegr, T., & Paulheim, H. (2018). On cognitive preferences and the plausibility of rule-based models. arXiv preprint arXiv:1803.01316.
Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2019). Explaining explanations: An overview of interpretability of machine learning. arXiv preprint arXiv:1806.00069v3.
Greco, J. (2010). Achieving knowledge. Cambridge: Cambridge University Press.
Greco, J. (2012). Intellectual virtues and their place in philosophy. In C. Jäger & W. Löffler (Eds.), Epistemology: Contexts, values, disagreement: Proceedings of the 34th international Wittgenstein symposium (pp. 117–130). Heusenstamm: Ontos.
Grimm, S. R. (2006). Is understanding a species of knowledge? British Journal for the Philosophy of Science, 57, 515–535.
Grimm, S. R. (2011). Understanding. In S. Bernecker & D. Pritchard (Eds.), The Routledge companion to epistemology (pp. 84–94). New York: Routledge.
Grimm, S. R. (2014). Understanding as knowledge of causes. In A. Fairweather (Ed.), Virtue epistemology naturalized: Bridges between virtue epistemology and philosophy of science. Dordrecht: Springer.
Grimm, S. R. (Ed.). (2018). Making sense of the world: New essays on the philosophy of understanding. New York: Oxford University Press.
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), Article 93.
Hempel, C. G. (1965). Aspects of scientific explanation. New York: The Free Press.
Huysmans, J., Dejaeger, K., Mues, C., Vanthienen, J., & Baesens, B. (2011). An empirical evaluation of the comprehensibility of decision table, tree and rule based predictive models. Decision Support Systems, 51(1), 141–154.
Kelemen, D. (1999). Functions, goals, and intentions: Children's teleological reasoning about objects. Trends in Cognitive Science, 12, 461–468.
Khalifa, K. (2012). Inaugurating understanding or repackaging explanation. Philosophy of Science, 79, 15–37.
Kim, B. (2015). Interactive and interpretable machine learning models for human machine collaboration. Ph.D. thesis, Massachusetts Institute of Technology.
Kliegr, T., Bahník, Š., & Fürnkranz, J. (2018). A review of possible effects of cognitive biases on interpretation of rule-based machine learning models. arXiv preprint arXiv:1804.02969.
Krening, S., Harrison, B., Feigh, K., Isbell, C., Riedl, M., & Thomaz, A. (2016). Learning from explanations using sentiment and advice in RL. IEEE Transactions on Cognitive and Developmental Systems, 9(1), 44–55.
Kvanvig, J. (2003). The value of knowledge and the pursuit of understanding. New York: Cambridge University Press.
Kvanvig, J. (2009). Response to critics. In A. Haddock, A. Millar, & D. Pritchard (Eds.), Epistemic value (pp. 339–351). New York: Oxford University Press.
Lage, I., Chen, E., He, J., Narayanan, M., Kim, B., Gershman, S., et al. (2019). An evaluation of the human-interpretability of explanation. arXiv preprint arXiv:1902.00006.
Lapuschkin, S., Wäldchen, S., Binder, A., Montavon, G., Samek, W., & Müller, K. R. (2019). Unmasking Clever Hans predictors and assessing what machines really learn. Nature Communications, 10(1), 1096.
Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2017). Fair, transparent, and accountable algorithmic decision-making processes: The premise, the proposed solutions, and the open challenges. Philosophy & Technology, 31, 611–627.
Lewis, D. K. (1986). Causal explanation. In D. K. Lewis (Ed.), Philosophical papers (Vol. II, pp. 214–240). New York: Oxford University Press.
Lipton, P. (2009). Understanding without explanation. In H. W. de Regt, S. Leonelli, & K. Eigner (Eds.), Scientific understanding: Philosophical perspectives (pp. 43–63). Pittsburgh: University of Pittsburgh Press.
Lombrozo, T., & Gwynne, N. Z. (2014). Explanation and inference: Mechanistic and functional explanations guide property generalization. Frontiers in Human Neuroscience, 8, 700.
Lombrozo, T., & Wilkenfeld, D. A. (forthcoming). Mechanistic vs. functional understanding. In S. R. Grimm (Ed.), Varieties of understanding: New perspectives from philosophy, psychology, and theology. New York: Oxford University Press.
McAuley, J., & Leskovec, J. (2013). Hidden factors and hidden topics: Understanding rating dimensions with review text. In Proceedings of the 7th ACM conference on recommender systems (pp. 165–172). New York: ACM.
Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–18.
Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. In Proceedings of the conference on fairness, accountability, and transparency (pp. 279–288). New York: ACM.
Mizrahi, M. (2012). Idealizations and scientific understanding. Philosophical Studies, 160, 237–252.
Páez, A. (2006). Explanations in K: An analysis of explanation as a belief revision operation. Oberhausen: Athena Verlag.
Pazzani, M. (2000). Knowledge discovery from data? IEEE Intelligent Systems, 15(2), 10–13.
Piltaver, R., Luštrek, M., Gams, M., & Martinčić-Ipšić, S. (2016). What makes classification trees comprehensible? Expert Systems with Applications, 62(C), 333–346.
Potochnik, A. (2017). Idealization and the aims of science. Chicago: University of Chicago Press.
Pritchard, D. (2008). Knowing the answer, understanding and epistemic value. Grazer Philosophische Studien, 77, 325–339.
Pritchard, D. (2014). Knowledge and understanding. In A. Fairweather (Ed.), Virtue scientia: Bridges between virtue epistemology and philosophy of science (pp. 315–328). Dordrecht: Springer.
Quinonero-Candela, J., Sugiyama, M., Schwaighofer, A., & Lawrence, N. D. (Eds.). (2009). Dataset shift in machine learning. Cambridge: MIT Press.
Reiss, J. (2012). The explanation paradox. Journal of Economic Methodology, 19, 43–62.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 1135–1144). New York: ACM.
Salmon, W. C. (1971). Statistical explanation. In W. C. Salmon (Ed.), Statistical explanation and statistical relevance. Pittsburgh: Pittsburgh University Press.
Salmon, W. C. (1984). Scientific explanation and the causal structure of the world. Princeton: Princeton University Press.
Samek, W., Wiegand, T., & Müller, K. R. (2017). Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv preprint arXiv:1708.08296.
Strevens, M. (2013). No understanding without explanation. Studies in History and Philosophy of Science, 44, 510–515.
Wilkenfeld, D. (2013). Understanding as representation manipulability. Synthese, 190, 997–1016.
Woodward, J. (2003). Making things happen: A theory of causal explanation. New York: Oxford University Press.
Zagzebski, L. (2001). Recovering understanding. In M. Steup (Ed.), Knowledge, truth, and duty: Essays on epistemic justification, responsibility, and virtue. New York: Oxford University Press.
Zagzebski, L. (2009). On epistemology. Belmont: Wadsworth.
Zeiler, M. D., & Fergus, R. (2014). Visualizing and understanding convolutional networks. In 13th European conference on computer vision (ECCV 2014) (pp. 818–833). Cham: Springer.
Metadata
Title
The Pragmatic Turn in Explainable Artificial Intelligence (XAI)
Author
Andrés Páez
Publication date
29-05-2019
Publisher
Springer Netherlands
Published in
Minds and Machines / Issue 3/2019
Print ISSN: 0924-6495
Electronic ISSN: 1572-8641
DOI
https://doi.org/10.1007/s11023-019-09502-w
