
2023 | Original Paper | Book Chapter

Explaining Predictions by Characteristic Rules

Authors: Amr Alkhatib, Henrik Boström, Michalis Vazirgiannis

Published in: Machine Learning and Knowledge Discovery in Databases

Publisher: Springer International Publishing


Abstract

Characteristic rules have been advocated for their ability to improve interpretability over discriminative rules within the area of rule learning. However, the former type of rule has not yet been used by techniques for explaining predictions. A novel explanation technique, called CEGA (Characteristic Explanatory General Association rules), is proposed, which employs association rule mining to aggregate multiple explanations generated by any standard local explanation technique into a set of characteristic rules. An empirical investigation is presented, in which CEGA is compared to two state-of-the-art methods, Anchors and GLocalX, for producing local and aggregated explanations in the form of discriminative rules. The results suggest that the proposed approach provides a better trade-off between fidelity and complexity than the two state-of-the-art approaches; CEGA and Anchors significantly outperform GLocalX with respect to fidelity, while CEGA and GLocalX significantly outperform Anchors with respect to the number of generated rules. The effects of changing the format of CEGA's explanations to discriminative rules and of using LIME or SHAP as the local explanation technique instead of Anchors are also investigated. The results show that the characteristic explanatory rules still compete favorably with rules in the standard discriminative format. The results also indicate that using CEGA in combination with either SHAP or Anchors consistently leads to higher fidelity than using LIME as the local explanation technique.
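A minimal sketch of the aggregation idea described in the abstract, assuming the mlxtend library; it is not the authors' implementation, and the transactions, class labels, and conditions below are hypothetical toy data. Each instance contributes one transaction consisting of its predicted class label and the conditions produced by a local explainer (e.g., Anchors rules or discretized top-SHAP features); Apriori then mines frequent itemsets, and only rules with a single class label in the antecedent and explanation conditions in the consequent are retained, giving characteristic rules of the form class → conditions.

```python
# Sketch: aggregating local explanations into characteristic rules
# via association rule mining (hypothetical data, not from the paper).
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# One transaction per explained instance: predicted class label plus
# the explanation items returned by the local explanation technique.
transactions = [
    ["label:>50K", "education=Masters", "hours>40"],
    ["label:>50K", "education=Masters", "capital-gain>0"],
    ["label:<=50K", "education=HS-grad", "hours<=40"],
    ["label:<=50K", "hours<=40"],
]

# One-hot encode the transactions for Apriori.
te = TransactionEncoder()
onehot = te.fit(transactions).transform(transactions)
df = pd.DataFrame(onehot, columns=te.columns_)

# Mine frequent itemsets, then derive association rules.
itemsets = apriori(df, min_support=0.3, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)

# Keep only characteristic rules: exactly one class label in the
# antecedent, and no class label in the consequent (class -> conditions).
class_items = {"label:>50K", "label:<=50K"}
characteristic = rules[
    rules["antecedents"].apply(lambda a: len(a) == 1 and next(iter(a)) in class_items)
    & rules["consequents"].apply(lambda c: c.isdisjoint(class_items))
]
print(characteristic[["antecedents", "consequents", "support", "confidence"]])
```

Swapping the filter so that the class label must appear in the consequent instead of the antecedent would yield discriminative rules (conditions → class), which is the alternative format the abstract reports on; the support and confidence thresholds are illustrative and would need tuning per dataset.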


Footnotes
1. All the datasets were obtained from https://www.openml.org except Adult, German credit, and Compas.
 
References
1. Ribeiro, M., Singh, S., Guestrin, C.: “Why should I trust you?”: explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016, pp. 1135–1144 (2016)
2. Lundberg, S., Lee, S.: A unified approach to interpreting model predictions. Adv. Neural. Inf. Process. Syst. 30, 4765–4774 (2017)
3. Ribeiro, M., Singh, S., Guestrin, C.: Anchors: high-precision model-agnostic explanations. In: AAAI Conference on Artificial Intelligence (AAAI) (2018)
4. Agrawal, R., Srikant, R.: Fast algorithms for mining association rules in large databases. In: Proceedings of the 20th International Conference on Very Large Data Bases, pp. 487–499 (1994)
5. Kohavi, R., Becker, B., Sommerfield, D.: Improving simple Bayes. In: European Conference on Machine Learning (1997)
6. Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: a review of machine learning interpretability methods. Entropy 23 (2021)
7. Molnar, C.: Interpretable Machine Learning: A Guide for Making Black Box Models Explainable (2019)
8. Delaunay, J., Galárraga, L., Largouët, C.: Improving anchor-based explanations. In: CIKM 2020 - 29th ACM International Conference on Information and Knowledge Management, pp. 3269–3272, October 2020
9. Natesan Ramamurthy, K., Vinzamuri, B., Zhang, Y., Dhurandhar, A.: Model agnostic multilevel explanations. Adv. Neural. Inf. Process. Syst. 33, 5968–5979 (2020)
10. Setzu, M., Guidotti, R., Monreale, A., Turini, F., Pedreschi, D., Giannotti, F.: GLocalX - from local to global explanations of black box AI models. Artif. Intell. 294, 103457 (2021)
11. Chen, T., Guestrin, C.: XGBoost: a scalable tree boosting system (2016)
12. Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., Pedreschi, D.: A survey of methods for explaining black box models. ACM Comput. Surv. 51 (2018)
13. Boström, H., Gurung, R., Lindgren, T., Johansson, U.: Explaining random forest predictions with association rules. Arch. Data Sci. Ser. A (Online First) 5, A05, 20 pp. (2018)
14. Bénard, C., Biau, G., Veiga, S., Scornet, E.: Interpretable random forests via rule extraction. In: Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, vol. 130, pp. 937–945 (2021)
15. Friedman, J., Popescu, B.: Predictive learning via rule ensembles. Ann. Appl. Stat. 2, 916–954 (2008)
16. Ribeiro, M., Singh, S., Guestrin, C.: Model-agnostic interpretability of machine learning. In: ICML Workshop on Human Interpretability in Machine Learning (WHI) (2016)
18. Kliegr, T., Bahník, Š., Fürnkranz, J.: A review of possible effects of cognitive biases on interpretation of rule-based machine learning models. Artif. Intell. 295, 103458 (2021)
19. Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. In: Proceedings of the 34th International Conference on Machine Learning, vol. 70, pp. 3145–3153 (2017)
20. Wang, Z., et al.: CNN explainer: learning convolutional neural networks with interactive visualization. IEEE Trans. Visual. Comput. Graph. (TVCG) (2020)
21. Turmeaux, T., Salleb, A., Vrain, C., Cassard, D.: Learning characteristic rules relying on quantified paths. In: Knowledge Discovery in Databases: PKDD 2003, 7th European Conference on Principles and Practice of Knowledge Discovery in Databases, Cavtat-Dubrovnik, Croatia, 22–26 September 2003, Proceedings, vol. 2838, pp. 471–482 (2003)
23. Cohen, W.: Fast effective rule induction. In: Proceedings of the Twelfth International Conference on Machine Learning, pp. 115–123 (1995)
26. Friedman, M.: A correction: the use of ranks to avoid the assumption of normality implicit in the analysis of variance. J. Am. Stat. Assoc. 34, 109 (1939)
27. Nemenyi, P.: Distribution-Free Multiple Comparisons. Princeton University (1963)
29. Slack, D., Hilgard, S., Jia, E., Singh, S., Lakkaraju, H.: Fooling LIME and SHAP: adversarial attacks on post hoc explanation methods. In: AAAI/ACM Conference on AI, Ethics, and Society (AIES) (2020)
30. Loyola-González, O.: Black-box vs. white-box: understanding their advantages and weaknesses from a practical point of view. IEEE Access 7, 154096–154113 (2019)
Metadata
Title
Explaining Predictions by Characteristic Rules
Authors
Amr Alkhatib
Henrik Boström
Michalis Vazirgiannis
Copyright Year
2023
DOI
https://doi.org/10.1007/978-3-031-26387-3_24
