
2019 | OriginalPaper | Book Chapter

Assessing Heuristic Machine Learning Explanations with Model Counting

Authors: Nina Narodytska, Aditya Shrotri, Kuldeep S. Meel, Alexey Ignatiev, Joao Marques-Silva

Published in: Theory and Applications of Satisfiability Testing – SAT 2019

Publisher: Springer International Publishing


Abstract

Machine Learning (ML) models are widely used in decision-making procedures in finance, medicine, education, and other domains. In these areas, ML outcomes can directly affect humans, e.g., by deciding whether a person should get a loan or be released from prison. Therefore, we cannot blindly rely on black-box ML models and need to explain the decisions they make. This motivated the development of a variety of ML-explainer systems, including LIME and its successor Anchor. Due to the heuristic nature of explanations produced by existing tools, it is necessary to validate them. We propose a SAT-based method to assess the quality of explanations produced by Anchor. We encode a trained ML model and an explanation for a given prediction as a propositional formula. Then, by using a state-of-the-art approximate model counter, we estimate the quality of the provided explanation as the number of solutions supporting it.
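To make the counting idea concrete, here is a toy sketch of the quantity being estimated: the fraction of inputs consistent with an explanation on which the model still returns the explained prediction. The classifier rule, the `anchor_precision` helper, and the feature indices are all illustrative assumptions; exhaustive enumeration over a tiny boolean space stands in for the approximate model counter used on real propositional encodings.

```python
from itertools import product

# Hypothetical binarized classifier over 4 boolean features; a toy
# stand-in for the trained model encoded as a propositional formula.
def model(x):
    return int((x[0] and x[1]) or x[3])

def anchor_precision(model, anchor, target, n_features):
    """Count all completions consistent with the anchor, and how many of
    them preserve the explained prediction (brute force stands in for
    approximate model counting)."""
    total = supporting = 0
    for z in product([0, 1], repeat=n_features):
        # Skip assignments that violate the anchor's fixed features.
        if any(z[i] != v for i, v in anchor.items()):
            continue
        total += 1
        supporting += int(model(z) == target)
    return supporting / total

# Anchor {x0=1, x1=1} for prediction 1: every completion agrees.
print(anchor_precision(model, {0: 1, 1: 1}, 1, 4))  # 1.0
# Weaker anchor {x0=1}: only some completions still predict 1.
print(anchor_precision(model, {0: 1}, 1, 4))  # 0.75
```

A precision of 1.0 means the anchor is a genuinely sufficient condition for the prediction; lower values quantify how often the heuristic explanation fails to hold.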


Footnotes
1
In the training phase, there is an additional hard tanh layer after batch normalization, but it is redundant in the inference phase.
 
Metadata
Title
Assessing Heuristic Machine Learning Explanations with Model Counting
Authors
Nina Narodytska
Aditya Shrotri
Kuldeep S. Meel
Alexey Ignatiev
Joao Marques-Silva
Copyright Year
2019
DOI
https://doi.org/10.1007/978-3-030-24258-9_19
