
2018 | Original Paper | Book Chapter

Venue Prediction for Social Images by Exploiting Rich Temporal Patterns in LBSNs

Authors: Jingyuan Chen, Xiangnan He, Xuemeng Song, Hanwang Zhang, Liqiang Nie, Tat-Seng Chua

Published in: MultiMedia Modeling

Publisher: Springer International Publishing


Abstract

Location (or equivalently, “venue”) is a crucial facet of user-generated images in social media (a.k.a. social images) for describing the events of people’s daily lives. While many existing works focus on predicting the venue category based on image content, we tackle the grand challenge of predicting the specific venue of a social image. Simply using the visual content of a social image is insufficient for this purpose due to its high diversity. In this work, we leverage users’ check-in histories in location-based social networks (LBSNs), which contain rich temporal movement patterns, to complement the limitations of using visual signals alone. In particular, we explore the transition patterns on successive check-ins and the periodical patterns on venue categories from users’ check-in behaviors in Foursquare. For example, users tend to check in at nearby cinemas after having meals at a restaurant (transition patterns), and frequently check in at churches on Sunday mornings (periodical patterns). To incorporate such rich temporal patterns into the venue prediction process, we propose a generic embedding model that fuses the visual signal from image content and various temporal signals from LBSN check-in histories. We conduct extensive experiments on Instagram social images, demonstrating that by properly leveraging the temporal patterns latent in Foursquare check-ins, we can significantly boost the accuracy of venue prediction.
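The abstract describes the fusion model only at a high level. As a purely illustrative sketch, an embedding model of this kind could combine a projected CNN image feature (visual signal), an embedding of the previously checked-in venue (transition signal), and an embedding of a (venue category, time slot) pair (periodical signal), and score candidate venues by inner product. The additive fusion, all class and parameter names, and the dimensions below are assumptions, not the paper's actual formulation.

# Minimal, illustrative sketch (PyTorch): fuse visual and temporal signals
# to score candidate venues. NOT the paper's exact model; the additive
# fusion, names, and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn

class VenueScorer(nn.Module):
    def __init__(self, n_venues, n_categories, n_time_slots,
                 visual_dim=2048, embed_dim=64):
        super().__init__()
        # Target venue embeddings used for scoring/ranking.
        self.venue_emb = nn.Embedding(n_venues, embed_dim)
        # Transition signal: embedding of the previously checked-in venue.
        self.prev_venue_emb = nn.Embedding(n_venues, embed_dim)
        # Periodical signal: embedding of a (venue category, time slot) pair.
        self.cat_time_emb = nn.Embedding(n_categories * n_time_slots, embed_dim)
        # Visual signal: project a CNN image feature into the same space.
        self.visual_proj = nn.Linear(visual_dim, embed_dim)
        self.n_time_slots = n_time_slots

    def forward(self, visual_feat, prev_venue, category, time_slot):
        # Fuse the three signals by simple addition (an assumption here).
        context = (self.visual_proj(visual_feat)
                   + self.prev_venue_emb(prev_venue)
                   + self.cat_time_emb(category * self.n_time_slots + time_slot))
        # Score every candidate venue by inner product with its embedding.
        return context @ self.venue_emb.weight.T  # shape: (batch, n_venues)

# Usage: rank venues for one image given its visual feature and check-in context.
model = VenueScorer(n_venues=1000, n_categories=50, n_time_slots=24 * 7)
scores = model(torch.randn(1, 2048),   # CNN feature of the image
               torch.tensor([42]),     # id of the previous check-in venue
               torch.tensor([7]),      # category of that previous venue
               torch.tensor([10]))     # hour-of-week time slot
top5 = scores.topk(5).indices          # top-5 predicted venues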


Footnotes
1
In this work, we use point-of-interest (POI), venue, and location interchangeably, which all refer to a specific venue.
 
Metadata
Title
Venue Prediction for Social Images by Exploiting Rich Temporal Patterns in LBSNs
Authors
Jingyuan Chen
Xiangnan He
Xuemeng Song
Hanwang Zhang
Liqiang Nie
Tat-Seng Chua
Copyright Year
2018
DOI
https://doi.org/10.1007/978-3-319-73600-6_28
