Published in: World Wide Web 6/2022

08.02.2022

GuidedWalk

Graph embedding with semi-supervised random walk

By: Mohsen Fazaeli, Saeedeh Momtazi



Abstract

General-purpose embeddings are not guaranteed to produce the best representation for a given target task. In the graph domain, random walk-based embedding methods such as DeepWalk and Node2Vec are widely used to generate general-purpose embeddings. These methods, however, cannot achieve high accuracy across different tasks, including node classification. In contrast, semi-supervised methods such as the Graph Convolutional Network (GCN), the Graph Attention Network (GAT), and their extensions achieve state-of-the-art performance on this task, as they use a few labels to learn the most appropriate embedding. These methods, however, depend on node features and cannot achieve high performance when node features are absent or low-dimensional. In this paper, we propose GuidedWalk, a semi-supervised random walk-based graph embedding method that outperforms other random walk-based competitors as well as GNNs in the semi-supervised setting on graphs without node features. The proposed model explores graph paths with more emphasis on paths that connect nodes of the same class. We show that the neural processing core of DeepWalk and Node2Vec propagates latent features across the sampled paths; our selected paths therefore increase the chance of propagating the right latent features, which benefits the node classification task. Our experiments on the Cora, Pubmed, Twitch DE, and Facebook datasets show improvements of 0.53%, 0.78%, 5.07%, and 7.13% in node classification accuracy compared to the state-of-the-art techniques in the field.
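The guiding idea described in the abstract, biasing random walks toward edges that connect same-class labeled nodes, can be sketched as follows. This is a minimal illustration under our own assumptions, not the paper's actual algorithm; the `bias` parameter and the function names are hypothetical.

```python
import random

def guided_step(graph, labels, current, bias=0.8):
    """Pick the next node from `current`'s neighborhood. With probability
    `bias`, prefer neighbors known to share `current`'s class label;
    otherwise step uniformly at random. `graph` maps node -> neighbor list;
    `labels` maps node -> class for the labeled subset only."""
    neighbors = graph[current]
    same_class = [n for n in neighbors
                  if current in labels and labels.get(n) == labels[current]]
    if same_class and random.random() < bias:
        return random.choice(same_class)
    return random.choice(neighbors)

def guided_walk(graph, labels, start, length=10):
    """Generate one label-guided walk of `length` nodes starting at `start`."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(guided_step(graph, labels, walk[-1]))
    return walk
```

The resulting walks would then be fed to a Skip-gram model, exactly as DeepWalk and Node2Vec feed their unbiased walks.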


Footnotes
1
In other words, assume that all nodes have the same degree and only 20% of nodes are labeled. The probability that an edge connects two nodes with known labels is then 0.2 × 0.2 = 0.04. This probability grows only to 0.25 when 50% of the node labels are used in the training phase, which is still not sufficient. Therefore, in many cases we cannot decide whether to overstate or understate the importance of an edge, and the performance of GuidedWalk drops to that of DeepWalk.
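The arithmetic in this footnote can be checked directly; the function name below is our own, used only for illustration.

```python
def labeled_edge_prob(labeled_fraction):
    """Probability that both endpoints of a randomly chosen edge carry
    known labels, assuming equal node degrees and labels spread
    uniformly over a `labeled_fraction` of the nodes."""
    return labeled_fraction ** 2
```

With 20% of nodes labeled this gives 0.04; with 50% labeled it gives 0.25, matching the footnote.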
 
2
Each tuple in probDist contains a node and its corresponding probability of selection.
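Sampling a node from such a probDist structure could look like the sketch below, assuming the probabilities sum to 1; the function name is our own, not the paper's.

```python
import random

def sample_from_dist(prob_dist):
    """Draw one node from a list of (node, probability) tuples,
    weighting each node by its selection probability."""
    nodes, probs = zip(*prob_dist)
    return random.choices(nodes, weights=probs, k=1)[0]
```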
 
3
The case of more than two vectors reduces to these conditions.
 
4
This is in contradiction to the definition of Skip-gram.
 
6
These experiments cannot be performed on GNN methods, since they produce probability distributions rather than embeddings.
 
Metadata
Title: GuidedWalk: Graph embedding with semi-supervised random walk
Authors: Mohsen Fazaeli, Saeedeh Momtazi
Publication date: 08.02.2022
Publisher: Springer US
Published in: World Wide Web / Issue 6/2022
Print ISSN: 1386-145X
Electronic ISSN: 1573-1413
DOI: https://doi.org/10.1007/s11280-021-00999-9
