2023 | Original Paper | Book Chapter

Graph Contrastive Learning with Adaptive Augmentation for Recommendation

Authors: Mengyuan Jing, Yanmin Zhu, Tianzi Zang, Jiadi Yu, Feilong Tang

Published in: Machine Learning and Knowledge Discovery in Databases

Publisher: Springer International Publishing

Abstract

Graph Convolutional Networks (GCNs) have become one of the most popular technologies in recommender systems, as they can effectively model high-order relationships. However, GCN-based methods usually suffer from two problems: sparse supervision signals and noisy interactions. To address these problems, graph contrastive learning has been applied to GCN-based recommendation. The general framework of graph contrastive learning first performs data augmentation on the input graph to obtain two graph views and then maximizes the agreement of representations across these views. Despite their effectiveness, existing methods ignore the differing impact of nodes and edges when performing data augmentation, which degrades the quality of the learned representations. Meanwhile, they usually adopt manual data augmentation schemes, limiting the generalization of the models. We argue that the data augmentation scheme should be learnable and adaptive to the inherent patterns in the graph structure, so that the model can learn representations that remain invariant to perturbations of unimportant structures while demanding fewer resources. In this work, we propose a novel Graph Contrastive learning framework with Adaptive data augmentation for Recommendation (GCARec). Specifically, for adaptive augmentation, we first calculate the retaining probability of each edge based on an attention mechanism and then sample edges according to this probability with a Gumbel-Softmax. Because the adaptive data augmentation scheme is based on a neural network and requires no domain knowledge, it is learnable and generalizable. Extensive experiments on three real-world datasets show that GCARec outperforms state-of-the-art baselines.
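The adaptive augmentation step described above (attention-derived retaining probabilities, then Gumbel-Softmax edge sampling) can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the attention network that would produce per-edge scores is abstracted into a fixed score vector, and all function names and parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_retain_probs(scores):
    """Map per-edge attention scores to retaining probabilities via a sigmoid."""
    return 1.0 / (1.0 + np.exp(-scores))

def gumbel_softmax_edge_mask(p, tau=0.5, rng=rng):
    """Sample a relaxed binary keep/drop decision per edge.

    For each edge, form two logits (log p for 'keep', log(1-p) for 'drop'),
    perturb both with Gumbel(0, 1) noise, and take a temperature-controlled
    softmax. The 'keep' component is returned as a soft edge mask; lowering
    tau pushes the mask toward hard 0/1 decisions.
    """
    eps = 1e-10
    logits = np.stack([np.log(p + eps), np.log(1.0 - p + eps)], axis=-1)
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape) + eps) + eps)
    y = (logits + gumbel) / tau
    y = np.exp(y - y.max(axis=-1, keepdims=True))
    y = y / y.sum(axis=-1, keepdims=True)
    return y[..., 0]

# Toy example: five edges with attention scores from a (hypothetical) scorer.
scores = np.array([2.0, 0.5, -1.0, 3.0, -2.5])
p = edge_retain_probs(scores)
mask = gumbel_softmax_edge_mask(p, tau=0.5)
view_edges = mask > 0.5  # harden the mask to build one augmented graph view
```

Drawing two independent masks from the same probabilities would yield the two graph views whose representations the contrastive objective pulls together; in a real model the sampling would be implemented with a differentiable framework so gradients flow back into the attention scorer.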


Metadata
Copyright Year: 2023
DOI: https://doi.org/10.1007/978-3-031-26387-3_36
