
2019 | OriginalPaper | Chapter

“Why Did You Do That?”

Explaining Black Box Models with Inductive Synthesis

Authors: Görkem Paçacı, David Johnson, Steve McKeever, Andreas Hamfelt

Published in: Computational Science – ICCS 2019

Publisher: Springer International Publishing


Abstract

By their nature, black box models are opaque, which makes generating explanations for their responses to stimuli challenging. Explaining black box models has become increasingly important given the prevalence of AI and ML systems and the need to build legal and regulatory frameworks around them. Such explanations can also increase trust in these uncertain systems. In our paper we present RICE, a method for generating explanations of the behaviour of black box models by (1) probing a model with sensitivity analysis to extract examples of its output; (2) applying CNPInduce, a method for inductive logic program synthesis, to generate logic programs from critical input-output pairs; and (3) interpreting the target program as a human-readable explanation. We demonstrate the application of our method by generating explanations of an artificial neural network trained to follow simple traffic rules in a hypothetical self-driving car simulation. We conclude with a discussion of the scalability and usability of our approach and its potential applications to explanation-critical scenarios.
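To make step (1) of the pipeline concrete, the following is a minimal illustrative sketch (not code from the paper) of probing a black box with one-at-a-time sensitivity analysis to collect critical input-output pairs, i.e. inputs where flipping a single input dimension changes the model's output. The `black_box` function is a hypothetical stand-in for a trained network; the input encoding, the `probe` helper, and its criterion for "critical" are all assumptions made for illustration.

```python
from itertools import product

def black_box(light_is_red: int, obstacle_ahead: int) -> str:
    """Hypothetical stand-in for a trained driving model."""
    return "stop" if (light_is_red or obstacle_ahead) else "go"

def probe(model, dims):
    """One-at-a-time sensitivity probe: enumerate the input grid and
    keep (input, output) pairs where perturbing a single input
    dimension changes the model's output."""
    critical = []
    for inputs in product(*dims):
        base = model(*inputs)
        is_critical = False
        for i, value in enumerate(inputs):
            for alt in dims[i]:
                if alt == value:
                    continue
                perturbed = list(inputs)
                perturbed[i] = alt
                if model(*perturbed) != base:
                    is_critical = True
                    break
            if is_critical:
                break
        if is_critical:
            critical.append((inputs, base))
    return critical

pairs = probe(black_box, [(0, 1), (0, 1)])
# (1, 1) is not critical here: with both a red light and an obstacle,
# no single flip changes the decision.
```

The resulting pairs would then serve as input examples for the program-synthesis step (2), which generalises them into a logic program.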


Metadata
Title
“Why Did You Do That?”
Authors
Görkem Paçacı
David Johnson
Steve McKeever
Andreas Hamfelt
Copyright Year
2019
DOI
https://doi.org/10.1007/978-3-030-22750-0_27
