DOI: 10.1145/3289600.3290975

A Simple Convolutional Generative Network for Next Item Recommendation

Published: 30 January 2019

ABSTRACT

Convolutional Neural Networks (CNNs) have recently been introduced in the domain of session-based next item recommendation. An ordered collection of past items the user has interacted with in a session (or sequence) is embedded into a 2-dimensional latent matrix and treated as an image, to which convolution and pooling operations are then applied. In this paper, we first examine the typical session-based CNN recommender and show that both its generative model and its network architecture are suboptimal for modeling long-range dependencies in the item sequence. To address these issues, we introduce a simple but very effective generative model that learns high-level representations from both short- and long-range item dependencies. Its network architecture is a stack of holed (i.e., dilated) convolutional layers, which efficiently enlarge the receptive field without relying on pooling. A further contribution is the effective use of residual block structures in recommender systems, which eases the optimization of much deeper networks. The proposed generative model attains state-of-the-art accuracy with less training time on the next item recommendation task. It can accordingly serve as a strong baseline for future recommendation research, especially when long sequences of user feedback are available.
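The abstract names the two architectural ingredients, causal dilated ("holed") convolutions and residual blocks, but does not spell out how they combine. The following PyTorch sketch is an illustrative assumption of that combination for next item prediction; the class names, channel width, kernel size, and dilation schedule are all hypothetical choices, not the authors' released implementation.

```python
# Minimal sketch, assuming a WaveNet-style stack of causal dilated 1-D
# convolutions with residual connections over item embeddings. All
# hyperparameters below are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedResidualBlock(nn.Module):
    """Causal dilated conv -> ReLU -> 1x1 conv, with a residual skip."""
    def __init__(self, channels: int, kernel_size: int, dilation: int):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left-pad only => causal
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.proj = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, sequence_length)
        h = F.pad(x, (self.pad, 0))   # pad the past only, never the future
        h = F.relu(self.conv(h))
        return x + self.proj(h)       # residual connection eases deep stacks

class NextItemCNN(nn.Module):
    """Embeds an item sequence and predicts the next item at every position."""
    def __init__(self, num_items: int, channels: int = 64,
                 dilations=(1, 2, 4, 8)):
        super().__init__()
        self.embed = nn.Embedding(num_items, channels)
        self.blocks = nn.Sequential(*[
            DilatedResidualBlock(channels, kernel_size=3, dilation=d)
            for d in dilations
        ])
        self.out = nn.Linear(channels, num_items)

    def forward(self, items: torch.Tensor) -> torch.Tensor:
        # items: (batch, sequence_length) of item ids
        x = self.embed(items).transpose(1, 2)  # -> (batch, channels, length)
        x = self.blocks(x).transpose(1, 2)     # -> (batch, length, channels)
        return self.out(x)                     # logits over candidate items
```

With kernel size 3 and dilations (1, 2, 4, 8), the top block sees 1 + 2·(1 + 2 + 4 + 8) = 31 past positions, so the receptive field grows exponentially with depth and no pooling is needed. In use, logits[:, t, :] scores candidates for position t + 1 given the prefix up to t, which matches the next item prediction setup the abstract describes.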

Published in

WSDM '19: Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining
January 2019, 874 pages
ISBN: 9781450359405
DOI: 10.1145/3289600
Copyright © 2019 ACM

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 30 January 2019


Qualifiers

• Research article

Acceptance Rates

WSDM '19 paper acceptance rate: 84 of 511 submissions, 16%. Overall acceptance rate: 498 of 2,863 submissions, 17%.
