16-10-2021 | Original Article

The Automated Laplacean Demon: How ML Challenges Our Views on Prediction and Explanation

Authors: Sanja Srećković, Andrea Berber, Nenad Filipović

Published in: Minds and Machines | Issue 1/2022

Abstract

Certain characteristics make machine learning (ML) a powerful tool for processing large amounts of data, but also make it particularly unsuitable for explanatory purposes. There are worries that its increasing use in science may sideline the explanatory goals of research. We analyze the key characteristics of ML that might have implications for the future directions of scientific research: epistemic opacity and ‘theory-agnostic’ modeling. These characteristics are further analyzed in a comparison of ML with traditional statistical methods, in order to demonstrate what specifically makes ML methodology substantially unsuitable for reaching explanations. The analysis is given broader philosophical context by connecting it with views on the role of prediction and explanation in science, their relationship, and the value of explanation. We proceed to show, first, that ML disrupts the relationship between prediction and explanation, commonly understood as a functional relationship. Then we show that the value of explanation is not exhausted by purely predictive functions, but rather has a ubiquitously recognized value for both science and everyday life. We then invoke two hypothetical scenarios with different degrees of automatization of science, which help test our intuitions about the role of explanation in science. The main question we address is whether ML will reorient or otherwise impact our standard explanatory practice. We conclude with a prognosis that ML will diversify science into, on the one hand, purely predictively oriented research based on ML-like techniques and, on the other, research that remains faithful to an anthropocentric focus on the search for explanation.

Footnotes
1
When discussing ML, we take artificial neural networks (ANNs) as the paradigmatic and most popular type of model implementing machine learning, although our conclusions can, if needed, be (more or less successfully) transferred to other, similar methods.
 
2
A similar attitude is reflected even in everyday contexts: there is empirical evidence suggesting consistent expectations that everyday explanations cite causes (Miller, 2017).
 
3
The xAI literature is not always clear on whether the explanandum is the whole process or only its outcome. Since not understanding the process leads to not understanding the outcome, i.e., the decision, the two are tightly connected and thus often addressed jointly. However, depending on the context, the ML process can be considered either as an explanandum or as something cited in the explanans of a particular outcome. Such considerations are irrelevant for our present purposes; here we will refer only to the ML process as an explanandum that lacks an explanans.
 
4
Of course, experts in the field who participate in the design or debugging of ML models understand the general mechanism through which these models work, and in that sense, to them these models are more appropriately described as ‘grey’ rather than black boxes. However, there are aspects of the functioning of ML models to which even experts in the field do not have complete access, and these aspects concern the paths through which the model proceeds from input to output. For example, even simple devices like calculators can be non-transparent to end-users, in the sense that they do not know how the device is made, nor can they mentally follow a mathematical operation because it is too complicated. However, when we get a certain output from an input using a calculator, every step leading from that particular input to that particular output is known and can be explicated. In the case of ML, the path leading from the input to the output is not available. This is because the internal logic of the model is altered as it ‘learns’ from the data, which is what separates ML from other technological tools (Burrell, 2016).
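
To make the contrast concrete, the following minimal sketch (ours, not taken from the paper or from Burrell) juxtaposes a calculator-style computation, whose every step can be recorded, with a toy network-style prediction; the ‘learned’ weights here are random placeholders standing in for values that training would fix, since the point is only that such internal quantities can be read off yet do not correspond to humanly explicable steps.

    # Minimal illustrative sketch (not from the paper). The 'learned' weights below are
    # random placeholders standing in for values a model would acquire by training on data.
    import numpy as np

    def traceable_multiply(a, b):
        """Calculator-style computation: every step from input to output can be explicated."""
        steps, result = [], 0
        for _ in range(b):            # repeated addition, each intermediate state recorded
            result += a
            steps.append(result)
        return result, steps

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(2, 8))      # placeholder for weights an ANN would learn from data
    W2 = rng.normal(size=(8, 1))

    def opaque_predict(x):
        h = np.tanh(x @ W1)           # activations can be inspected, but individually they
        return h @ W2                 # do not correspond to humanly explicable steps

    print(traceable_multiply(3, 4))                # (12, [3, 6, 9, 12])
    print(opaque_predict(np.array([0.5, -1.0])))   # a number with no step-wise rationale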
 
5
By complexity we refer to the general complexity of a method (arising from the great number of layers, operations, and data involved in ML processes), not to the technical concept of computational complexity, which can be measured in various ways (e.g., Kolmogorov complexity).
 
6
It is intractable to test all paths in software systems that contain more than about 10^8 paths (roughly 300 lines of code), and it should be noted that most software systems used in science are much larger than that (Symons and Horner, 2014).
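
For a rough sense of scale, the following back-of-the-envelope sketch (our illustration; the branch density is an assumption, not a figure from Symons and Horner) shows how the number of execution paths grows exponentially with the number of independent two-way branches, reaching roughly 10^8 paths for a program of about 300 lines.

    # Back-of-the-envelope illustration (assumed branch density, not a figure from the source):
    # with independent two-way branches, the number of execution paths grows as 2 ** branches.
    lines_of_code = 300
    branches_per_100_lines = 9                                   # assumed density for illustration
    branches = lines_of_code * branches_per_100_lines // 100     # ~27 branches
    paths = 2 ** branches                                        # 2**27 is roughly 1.3e8 paths
    print(f"{branches} branches -> about {paths:.2e} execution paths to test exhaustively")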
 
7
The development of various xAI tools should, in principle, alleviate a model’s opacity to a degree, since it would hopefully lead to more transparent processes and to full or partial detection of correlations (Zednik, 2021). This, of course, would be helpful only provided that the calculations of the xAI techniques are based on the same parameters as the ML models they are meant to explain, and not merely on input–output trends, as pointed out by some more pessimistic views (Rudin, 2019). Since the development of xAI tools is a fast-growing area, owing to numerous ethical considerations (see Sect. 2), such explanatory tools could be a thing of the not-too-distant future. The potential solution which the xAI techniques could provide is, unfortunately, somewhat limited, since it addresses only the obstacles posed by the lack of epistemic access. However, it could be the first step in extracting knowledge about causal relations from an ML model. To address other obstacles, such as the mismatch between explanatory and predictive modeling, other paths and techniques would need to be devised.
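
As a concrete illustration of this worry, the sketch below (ours; black_box is a hypothetical stand-in for a trained model) constructs a post-hoc ‘explanation’ based purely on input–output behavior: it perturbs an input, queries the black box, and fits a local linear surrogate, without any access to the model’s actual parameters.

    # Illustrative sketch of a post-hoc, input-output-based surrogate 'explanation'
    # (ours; black_box is a hypothetical stand-in for an opaque trained model).
    import numpy as np

    def black_box(X):
        # Stand-in for a trained ML model queried only through its predictions.
        return np.sin(X[..., 0]) + 0.5 * X[..., 1] ** 2

    def local_surrogate(f, x0, n_samples=500, scale=0.1, seed=0):
        rng = np.random.default_rng(seed)
        X = x0 + scale * rng.normal(size=(n_samples, x0.shape[0]))   # perturb around x0
        y = f(X)                                                     # query the black box
        A = np.hstack([X - x0, np.ones((n_samples, 1))])             # local linear design matrix
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef[:-1]    # local feature weights: an input-output trend, not the model's logic

    x0 = np.array([0.3, -1.2])
    print(local_surrogate(black_box, x0))   # roughly [cos(0.3), -1.2] for this toy black box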
 
8
The massive datasets ML systems learn from are often referred to by the term ‘big data’, meaning “rapidly collected, complex data in such unprecedented quantities that terabytes (10^12 bytes), petabytes (10^15 bytes) or even zettabytes (10^21 bytes) of storage may be required” (Wyber et al., 2015; Leonelli, 2020).
 
9
On the other hand, Trout argues that the close connection between explanation and understanding should not be taken for granted. Since a sense of understanding can result from various cognitive biases, we sometimes become overconfident in our explanations as a result of the very biases that produce that sense of understanding (Trout, 2002). On such a view, explanation and understanding should not be considered so closely tied. We understand Trout’s suggestion as a warning that understanding should not be the only measure of the value of explanation, because our sense of understanding can be misleading. Thus, if we relied only on understanding as the source of the value of an explanation, many misinterpretations would seem more plausible than other, correct but more complex explanations. However, even if a sense of understanding should not be the only measure of the value of an explanation, as Trout suggests, the understanding that an explanation brings nevertheless makes a significant contribution to its value.
 
10
Discussing intrinsic value requires more caution than seems present in the literature. Some views assign intrinsic value to explanations and then proceed to describe it in terms of some other value, such as understanding (e.g., Hempel 1962; Strevens 2008; Lombrozo 2011). We do not consider these proper intrinsic-value views: for explanation to qualify as intrinsically valuable, the value needs to be assigned to explanation per se, regardless of any relations it bears to other related concepts. Other views, which take explanations as bona fide intrinsically valuable, consider them rewarding in themselves, such that they “may be sought out as such, even by the youngest of children, with no further agenda” (Keil, 2006, p. 234; see also Gopnik, 1998). If such views are correct, then it seems unlikely that the value of explanation could be impacted by some non-explanatory tool such as ML, since its value does not depend on its relations to anything else. However, the existing discussion on the impact of predictive methods such as ML on explanatory practice in science, as well as the major historical discussions on the value of explanation and its relationship with prediction, seem to be based on the belief that such views are not quite convincing, and we will proceed with the inquiry in the directions indicated by these discussions.
 
11
We thank the anonymous reviewer for pointing out this possibility to us.
 
12
Admittedly, we needed to tweak the specifications of Laplace’s demon to make a proper analogy with ML: at least some parts of its internal processes are available to us, and it does not reach its conclusions by knowing the laws of nature (as in the original), but by capturing correlations among the data. We chose to do this when contrasting the Laplacean demon with a clairvoyant in order to focus primarily on what we found important for the epistemological analysis of the justification of ML predictions. Namely, the exact way in which the predictions are reached (the fact that this method is designed by scientists according to a sound methodology, rather than through some kind of ‘magic’ or clairvoyance) emphasizes an aspect of ML which is important for justifying the method itself.
 
Literature
Berber, A., & Srećković, S. (2021). Inherent ethical problems of machine learning. Unpublished manuscript, Faculty of Philosophy, Belgrade University, Belgrade, Serbia.
Bird, A. (2011). Philosophy of science and epistemology. In S. French & J. Saatsi (Eds.), Continuum companion to the philosophy of science (pp. 15–32). London: Continuum.
Boge, F. J., & Grünke, P. (2019). Computer simulations, machine learning and the Laplacean demon: Opacity in the case of high energy physics. In Kaminski, Resch, & Gehring (Eds.), The science and art of simulation II. Springer.
Boon, M. (2020). How scientists are brought back into science—The error of empiricism. In M. Bertolaso & F. Sterpetti (Eds.), A critical reflection on automated science (Human Perspectives in Health Sciences and Technology 1, pp. 43–65). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-25001-0_4
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1–12.
Calude, C. S., & Longo, G. (2017). The deluge of spurious correlations in big data. Foundations of Science, 22(3), 595–612.
Chouinard, M. M., Harris, P. L., & Maratsos, M. P. (2007). Children’s questions: A mechanism for cognitive development. Monographs of the Society for Research in Child Development, 72(1), 1–129.
De Regt, H. W., & Dieks, D. (2005). A contextual approach to scientific understanding. Synthese, 144, 137–170.
Gilpin, L. H., Bau, D., Yuan, B. Z., et al. (2018). Explaining explanations: An overview of interpretability of machine learning. In Proceedings of the 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA) (pp. 80–89). Turin; New York: IEEE.
Gopnik, A., & Meltzoff, A. N. (1996). Words, thoughts and theories. Bradford/MIT Press.
Hanson, N. R. (1959). On the symmetry between explanation and prediction. Philosophical Review, 68, 349–358.
Hempel, C. (1962). Explanation in science and in history. In R. G. Colodny (Ed.), Frontiers of science and philosophy (pp. 7–33). Pittsburgh, PA: University of Pittsburgh Press.
Hempel, C. G., & Oppenheim, P. (1948). Studies in the logic of explanation. Philosophy of Science, 15, 135–175.
Hickling, A. K., & Wellman, H. M. (2001). The emergence of children’s causal explanations and theories: Evidence from everyday conversation. Developmental Psychology, 37(5), 668.
Hofstadter, A. (1951). Explanation and necessity. Philosophy and Phenomenological Research, 11, 339–347.
Laplace, P. S. (1814). Philosophical essay on probabilities (A. Dale, Trans., 1999). New York: Springer.
Lipton, P. (2009). Understanding without explanation. In H. W. de Regt, S. Leonelli, & K. Eigner (Eds.), Scientific understanding: Philosophical perspectives (pp. 43–63). University of Pittsburgh Press.
Liquin, E., & Lombrozo, T. (2018). Determinants and consequences of the need for explanation. In T. T. Rogers, M. Rau, X. Zhu, & C. W. Kalish (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (pp. 696–701). Austin, TX: Cognitive Science Society.
Lombrozo, T. (2011). The instrumental value of explanations. Philosophy Compass, 6(8), 539–551.
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183.
Miller, T. (2017). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38.
Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. In FAT 2019: Conference on Fairness, Accountability, and Transparency. Atlanta, GA.
Nagel, E. (1961). The structure of science: Problems in the logic of scientific explanation. Harcourt, Brace & World.
Páez, A. (2019). The pragmatic turn in explainable artificial intelligence (XAI). Minds and Machines, 29, 441–459.
Quine, W. V. O., & Ullian, J. S. (1978). The web of belief. McGraw-Hill.
Reichenbach, H. (1938). Experience and prediction. University of Chicago Press.
Resch, M., & Kaminski, A. (2019). The epistemic importance of technology in computer simulation and machine learning. Minds and Machines, 29(1), 9–17.
Ribera, M., & Lapedriza, A. (2019). Can we do better explanations? A proposal of user-centered explainable AI. Presented at the Explainable Smart Systems Conference 2019, Los Angeles.
Salmon, W. (1978). Why ask, ‘Why?’? An inquiry concerning scientific explanation. Proceedings and Addresses of the American Philosophical Association, 51, 683–705.
Salmon, W. (1999). The spirit of logical empiricism: Carl G. Hempel’s role in twentieth-century philosophy of science. Philosophy of Science, 66, 333–350.
Samek, W., Montavon, G., Vedaldi, A., Hansen, L. K., & Müller, K.-R. (Eds.). (2019). Explainable AI: Interpreting, explaining and visualizing deep learning (LNCS, Vol. 11700). Cham: Springer.
Strevens, M. (2008). Depth: An account of scientific explanation. Harvard University Press.
Wilson, R. A., & Keil, F. (1998). The shadows and shallows of explanation. Minds and Machines, 8(1), 137–159.
Wyber, R., Vaillancourt, S., Perry, W., Mannava, P., Folaranmi, T., & Celi, L. A. (2015). Big data in global health: Improving health in low- and middle-income countries. Bulletin of the World Health Organization, 93, 203–208.
Zednik, C. (2021). Explainable AI as a tool for scientific exploration. Presented online on 21.04.2021, Online Seminars on the Foundations and Ethics of AI, Lugano.
Metadata
Title: The Automated Laplacean Demon: How ML Challenges Our Views on Prediction and Explanation
Authors: Sanja Srećković, Andrea Berber, Nenad Filipović
Publication date: 16-10-2021
Publisher: Springer Netherlands
Published in: Minds and Machines, Issue 1/2022
Print ISSN: 0924-6495
Electronic ISSN: 1572-8641
DOI: https://doi.org/10.1007/s11023-021-09575-6
