
2024 | OriginalPaper | Chapter

Alleviating Over-Smoothing via Aggregation over Compact Manifolds

Authors : Dongzhuoran Zhou, Hui Yang, Bo Xiong, Yue Ma, Evgeny Kharlamov

Published in: Advances in Knowledge Discovery and Data Mining

Publisher: Springer Nature Singapore


Abstract

Graph neural networks (GNNs) have achieved significant success in various applications. Most GNNs learn node features by aggregating information from each node's neighbors and applying a feature transformation in every layer. However, node features become indistinguishable after many layers, leading to performance deterioration: a significant limitation known as over-smoothing. Past work has adopted various techniques to address this issue, such as normalization and skip-connections of layer-wise outputs. Our study shows that the information aggregations in existing work are all contracted aggregations, which have the intrinsic property that features inevitably converge to a single point after many layers. To this end, we propose the aggregation over compact manifolds method (ACM), which replaces the existing information aggregation with aggregation over compact manifolds, a special type of manifold, thereby avoiding contracted aggregations. In this work, we theoretically analyze contracted aggregation and its properties. We also provide an extensive empirical evaluation showing that ACM effectively alleviates over-smoothing and outperforms the state-of-the-art. The code is available at https://github.com/DongzhuoranZhou/ACM.git.
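The contracted-aggregation behavior described in the abstract can be illustrated with a minimal sketch (this is an illustrative toy, not the paper's method or code): repeating mean aggregation over a small connected graph drives all node features toward a single common value, which is exactly the over-smoothing collapse.

```python
def mean_aggregate(features, neighbors):
    """One layer of mean aggregation: each node's new feature is the
    average of its own and its neighbors' current features."""
    new = []
    for i, nbrs in enumerate(neighbors):
        group = [i] + nbrs
        new.append(sum(features[j] for j in group) / len(group))
    return new

# A tiny path graph: 0-1, 1-2, 2-3, with scalar node features.
neighbors = [[1], [0, 2], [1, 3], [2]]
features = [0.0, 1.0, 2.0, 3.0]

for _ in range(50):  # stack many aggregation layers
    features = mean_aggregate(features, neighbors)

# After many layers the features are nearly indistinguishable:
spread = max(features) - min(features)
```

Running this, `spread` shrinks geometrically with depth, since mean aggregation is a row-stochastic (hence contractive) linear map on a connected graph; replacing it with an aggregation on a compact manifold, as ACM proposes, is what avoids this collapse.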


Appendix
Footnotes
1
In Fig. 1b, the unit circle is used as the embedding space. The aggregation of points is defined under polar coordinates. For example, the aggregation of two points on the unit circle with polar coordinates \((1, \theta)\) and \((1, \phi)\) is \((1, \frac{\theta+\phi}{2})\).
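The footnote's unit-circle aggregation can be sketched directly (a hypothetical helper for illustration; the function name is ours, not from the paper): two points given by polar angles are aggregated by averaging the angles, keeping the result on the circle.

```python
import math

def aggregate_on_circle(theta, phi):
    """Aggregate two unit-circle points (1, theta) and (1, phi),
    given in polar coordinates, to (1, (theta + phi) / 2)."""
    return (theta + phi) / 2

# Aggregate the points at angles 0 and pi/2.
mean_angle = aggregate_on_circle(0.0, math.pi / 2)

# The result stays on the unit circle; in Cartesian coordinates:
x, y = math.cos(mean_angle), math.sin(mean_angle)
```

Because the output has radius 1 by construction, repeated aggregation cannot contract points toward the circle's interior, in contrast to Euclidean mean aggregation.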
 
Metadata
Title
Alleviating Over-Smoothing via Aggregation over Compact Manifolds
Authors
Dongzhuoran Zhou
Hui Yang
Bo Xiong
Yue Ma
Evgeny Kharlamov
Copyright Year
2024
Publisher
Springer Nature Singapore
DOI
https://doi.org/10.1007/978-981-97-2253-2_31
