
2021 | OriginalPaper | Chapter

GraphSVX: Shapley Value Explanations for Graph Neural Networks

Authors : Alexandre Duval, Fragkiskos D. Malliaros

Published in: Machine Learning and Knowledge Discovery in Databases. Research Track

Publisher: Springer International Publishing

Abstract

Graph Neural Networks (GNNs) achieve significant performance on various learning tasks over geometric data by incorporating the graph structure into the learning of node representations, which in turn renders their predictions challenging to comprehend. In this paper, we first propose a unified framework satisfied by most existing GNN explainers. Then, we introduce GraphSVX, a post hoc, local, model-agnostic explanation method specifically designed for GNNs. GraphSVX is a decomposition technique that captures the "fair" contribution of each feature and node towards the explained prediction by constructing a surrogate model on a perturbed dataset. It extends to graphs and ultimately provides, as explanations, the Shapley values from game theory. Experiments on real-world and synthetic datasets demonstrate that GraphSVX achieves state-of-the-art performance compared to baseline models while exhibiting core theoretical and human-centric properties.
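To make the mask-and-surrogate idea in the abstract concrete, the sketch below estimates Shapley values for a generic black-box predictor: it samples coalitions (binary masks), builds a perturbed dataset by switching masked inputs to a baseline, queries the model, and fits a kernel-weighted linear surrogate whose coefficients approximate the Shapley values. This is a minimal illustration of the general kernel-based estimation technique under stated assumptions, not the authors' GraphSVX implementation; `explain`, `shapley_kernel`, `predict_fn`, and `baseline` are hypothetical names, and for a GNN the mask would range over the target node's neighbours and feature dimensions rather than a flat feature vector.

```python
import numpy as np
from math import comb

def shapley_kernel(M, s):
    """Kernel weight for a coalition of size s among M players (SHAP-style)."""
    if s == 0 or s == M:
        return 1e6  # very large weight pins the empty and full coalitions
    return (M - 1) / (comb(M, s) * s * (M - s))

def explain(predict_fn, x, baseline, n_samples=2048, seed=0):
    """Estimate Shapley values of the M inputs of `predict_fn` at point `x`.

    Random binary masks decide which inputs keep their value from `x` and
    which are replaced by `baseline` ("absent"). A weighted linear surrogate
    fitted on this perturbed dataset yields the Shapley estimates.
    """
    rng = np.random.default_rng(seed)
    M = len(x)
    Z = rng.integers(0, 2, size=(n_samples, M))             # coalitions z in {0,1}^M
    X_pert = np.where(Z == 1, x, baseline)                   # perturbed dataset
    y = np.array([predict_fn(row) for row in X_pert])        # black-box queries
    w = np.array([shapley_kernel(M, int(z.sum())) for z in Z])

    # Weighted least squares: y ≈ phi_0 + Z @ phi
    A = np.hstack([np.ones((n_samples, 1)), Z])
    Aw = A * w[:, None]
    coef = np.linalg.solve(Aw.T @ A, Aw.T @ y)
    return coef[0], coef[1:]                                  # base value, Shapley estimates

# Toy usage on a hand-made function of three inputs (stand-ins for node/feature masks)
f = lambda v: 2.0 * v[0] + v[1] * v[2]
base, phi = explain(f, x=np.array([1.0, 1.0, 1.0]), baseline=np.zeros(3))
print(base, phi)  # expect roughly base ≈ 0 and phi ≈ [2.0, 0.5, 0.5]
```

The surrogate's coefficients decompose the prediction among the inputs, which is the sense in which the paper calls GraphSVX a decomposition technique.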

Footnotes
1
Axiom: If \({\textit{val}}(S\cup \{j\}) = {\textit{val}}(S)\) for all \(S \subseteq \{1,\ldots ,p\}\) with \(j \notin S\), then \(\phi _j({\textit{val}}) = 0\).
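For context, this is the standard null-player (dummy) axiom of the Shapley value; it follows directly from the classical definition, reproduced here in the footnote's notation (a well-known formula from cooperative game theory, not specific to this paper):
\[
\phi_j({\textit{val}}) \;=\; \sum_{S \subseteq \{1,\ldots,p\} \setminus \{j\}} \frac{|S|!\,(p-|S|-1)!}{p!}\,\bigl({\textit{val}}(S \cup \{j\}) - {\textit{val}}(S)\bigr),
\]
so if every marginal contribution \({\textit{val}}(S \cup \{j\}) - {\textit{val}}(S)\) vanishes, then \(\phi_j({\textit{val}}) = 0\).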
 
Metadata
Title
GraphSVX: Shapley Value Explanations for Graph Neural Networks
Authors
Alexandre Duval
Fragkiskos D. Malliaros
Copyright Year
2021
DOI
https://doi.org/10.1007/978-3-030-86520-7_19
