1 Introduction
We propose Q-BASEX, a next-POI recommendation model that can generate next-POI visit recommendations for new POIs (not observed in the training set) and even for user states whose contextual conditions have not been observed in the training set. Q-BASEX is evaluated on two POI-visit trajectory data sets (Rome and Florence) by measuring: the precision of the recommendations; how well they match the user's expected visit experience; the coverage of suggested relevant items; the diversity of the items suggested to the various users; and the (un)popularity of the recommended items. Q-BASEX is compared with Session KNN (SKNN) [10], a next-item nearest-neighbour recommendation model that in previous studies resulted to be more accurate than other Inverse Reinforcement Learning methods [12, 13].

2 Related Work
Session KNN (SKNN) is a nearest-neighbour-based RS approach that exploits users' behavioural data logs that are similar (neighbours) to the logs of a target user. GRU4REC is another popular method used in session-based RSs: it uses a Gated Recurrent Unit (GRU) Recurrent Neural Network to predict the next action (i.e., the next item to purchase) of a target user, given information on her past action sequences.

3 Next-POI Recommendation with Q-BASEX
3.1 Data
Dataset | # POIs | # Trajectories | # Context features | # Content features | # "Expert" features | # Behaviour features |
---|---|---|---|---|---|---|
Florence | 316 | 2110 | 15 | 29 | 9 | 5 |
Rome | 376 | 4340 | 14 | 28 | 9 | 5 |
3.2 User Behaviour Learning
3.3 Clustering Similarly Behaving Users
3.4 Recommendation Generation with Q-BASEX
Q-BASEX is an extension of Q-BASE [13]. Q-BASE harnesses the behavioural model of the cluster the user belongs to in order to suggest the next-POI visit actions the user should make from her current POI visit (state s) [12]. The recommended POI-visit actions a are those with the highest \(Q(s,\cdot )\) value in the user's current state. However, when the observations of the users are limited, not all the possible combinations of contextual situations in a POI and next POI-visit actions may have been observed in the training set. Hence, Q-BASE is often not able to generate a full set of top-n recommendations.

The proposed solution is, when Q-BASE is not able to generate the required n recommendations, to ignore the information given by the current context of the user in the state s, and to identify the set of states gen(s) that represent a visit to the same POI as state s, but possibly in different contexts. Then, the next POI-visit actions a for which we are able to compute \(Q(s',a)\), for states \(s' \in gen(s)\), are sorted by \(AVG_{s' \in gen(s)} \{ Q(s', a) \}\), and the top-scoring actions are recommended. We call this new IRL-based RS Q-BASEX (conteXt relaXed). If a full set of recommendations cannot be generated even by ignoring the current user context, Q-BASEX generates recommendations by considering the predecessor state of s (if any), hence computing next-visit recommendations suited for the previous location of the user; the "previous" state is typically related to the "current" state of the user.

A second novelty of Q-BASEX is the capability to generate recommendations for new, unseen POIs, i.e., new venues that have not been visited yet by any user and are therefore not in the training set. Let \(\phi (a)\) be the feature vector of a: a binary vector containing the same content features that model a POI in the state model, but here modelling the action of moving to a POI. Let \(a_n \in A_n\) be a new POI-visit action that has not been previously observed (it is not in the training set). Given the user's current state s, and considering the actions for which we are able to compute the value \(Q(s,\cdot )\), we compute the (Jaccard index) similarity \(sim(\phi (a), \phi (a_n))\) between the POI feature vectors associated to an observed (known) visit action \(a \in A_k\) and to the unseen new POI associated to \(a_n\). In order to generate next-visit recommendations for new POIs, Q-BASEX then scores \(a_n\) by combining these similarities with the known values \(Q(s,a)\), \(a \in A_k\).

4 Experimental Study
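Before turning to the experiments, the two mechanisms of Sect. 3.4 can be made concrete with a minimal sketch: Q-BASE scoring with the Q-BASEX context-relaxation fallback, and similarity-based scoring of unseen POIs. Everything below is illustrative, not the authors' implementation: the toy (POI, context) state encoding, all names, and the similarity-weighted average used for new POIs are assumptions.

```python
from collections import defaultdict

# Toy state/action model (an assumption for illustration): a state is a
# (POI, context) pair and Q maps each observed state to {action: Q-value}.
Q = {
    ("duomo", "sunny"): {"uffizi": 0.9},
    ("duomo", "rainy"): {"uffizi": 0.3, "bargello": 0.7},
}
# Binary content-feature sets of the POI reached by each action.
FEATS = {"uffizi": {"museum", "art"}, "bargello": {"museum", "sculpture"}}

def gen(state):
    """gen(s): the states that visit the same POI as s, in any context."""
    return [s for s in Q if s[0] == state[0]]

def qbasex_recommend(state, n):
    """Rank next-POI visit actions with Q-BASE; when fewer than n actions
    were observed in `state`, relax the context (Q-BASEX) by averaging Q
    over gen(state)."""
    scored = dict(Q.get(state, {}))
    if len(scored) < n:
        sums, counts = defaultdict(float), defaultdict(int)
        for s2 in gen(state):
            for a, q in Q[s2].items():
                sums[a] += q
                counts[a] += 1
        scored = {a: sums[a] / counts[a] for a in sums}
    return sorted(scored, key=scored.get, reverse=True)[:n]

def jaccard(f1, f2):
    """Jaccard similarity between two binary feature sets."""
    return len(f1 & f2) / len(f1 | f2)

def score_new_poi(state, new_feats):
    """Score an unseen POI-visit action from the Q-values of the known
    actions in `state`, weighted by content similarity. The exact
    combination is not given in this excerpt; a similarity-weighted
    average is assumed here."""
    known = Q.get(state, {})
    w = {a: jaccard(FEATS[a], new_feats) for a in known}
    total = sum(w.values())
    return sum(w[a] * known[a] for a in known) / total if total else 0.0

# In ("duomo", "sunny") only one action was observed, so the top-2 list
# is completed via context relaxation:
print(qbasex_recommend(("duomo", "sunny"), 2))  # → ['bargello', 'uffizi']
print(score_new_poi(("duomo", "rainy"), {"museum", "garden"}))
```

Note that the fallback only activates when the directly observed actions cannot fill the top-n list, mirroring the cascade described above (context relaxation first, predecessor state as a last resort, which is omitted in this sketch).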
Our first hypothesis is that Q-BASEX can generate better recommendations than a nearest-neighbour baseline (SKNN). The second hypothesis is that, by assigning a test trajectory to a cluster on the basis of the behavioural features, Q-BASEX achieves a better performance than if test trajectories are assigned to a cluster according to content features.

4.1 Experimental Strategy
4.2 Baseline Recommendation Techniques
We compare Q-BASEX with SKNN [10], which is considered a strong state-of-the-art next-item recommendation method [11]; it has shown a better accuracy than another IRL-based model presented in [13].

SKNN recommends the next item (visit action) to a user by considering her current session (trajectory) and searching for similar sessions (the neighbourhood) in the data set. These are obtained by computing the binary cosine similarity \(c(\zeta , \zeta _i)\) between the current trajectory \(\zeta \) and those in the data set, \(\zeta _i\). Given the set of nearest neighbours \(N_{\zeta }\), the score of a visit action a can then be computed as
\[ score(a, \zeta ) = \sum _{\zeta _i \in N_{\zeta }} c(\zeta , \zeta _i) \cdot \mathbb {1}_{\zeta _i}(a), \]
where \(\mathbb {1}_{\zeta _i}(a) = 1\) if the trajectory \(\zeta _i\) contains the action a, and 0 otherwise. The items recommended by SKNN are those with the highest scores.

4.3 Performance Metrics
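As a concrete reference for the accuracy metric reported in the tables below, precision@n (Prec) can be computed per test case as the fraction of the n recommended actions that the user actually performed. This is one standard formulation, assumed here since the excerpt does not spell out the metric definitions.

```python
def precision_at_n(recommended, actual_next):
    """Fraction of the top-n recommended POI-visit actions that appear in
    the user's actual continuation of the trajectory (a standard
    precision@n formulation, assumed here)."""
    hits = sum(1 for a in recommended if a in actual_next)
    return hits / len(recommended)

# Top-3 recommendations, one of which matches the user's next visits:
print(precision_at_n(["uffizi", "bargello", "pitti"], {"pitti", "boboli"}))
# → 0.3333333333333333
```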
Model | Top-n | Prec | Rew | Sim | I-Cov | Unique | Pop |
---|---|---|---|---|---|---|---|
Q-BASEX | 1 | 0.10 | 0.44 | 0.10 | 0.33 | 0.36 | 0.79 |
SKNN | 1 | 0.02 | −0.03 | 0.06 | 0.28 | 0.30 | 0.88 |
Q-BASEX | 3 | 0.09 | 0.26 | 0.10 | 0.53 | 0.21 | 0.78 |
SKNN | 3 | 0.06 | 0.00 | 0.08 | 0.38 | 0.14 | 0.92 |
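For completeness, the SKNN neighbourhood scoring of Sect. 4.2 can be sketched as follows; the trajectory-as-set representation and all names are assumptions, not the implementation of [10].

```python
from collections import defaultdict

def cosine_binary(s1, s2):
    """Binary cosine similarity c(ζ, ζi) between two trajectories, each
    represented as the set of visited POIs."""
    if not s1 or not s2:
        return 0.0
    return len(s1 & s2) / (len(s1) ** 0.5 * len(s2) ** 0.5)

def sknn_scores(current, history, k):
    """Score candidate next-visit actions as in SKNN: sum, over the k
    trajectories most similar to the current one, of their similarity,
    for every action they contain (a sketch of the baseline)."""
    neighbours = sorted(history, key=lambda t: cosine_binary(current, t),
                        reverse=True)[:k]
    scores = defaultdict(float)
    for t in neighbours:
        w = cosine_binary(current, t)
        for a in t - current:  # do not re-recommend already visited POIs
            scores[a] += w
    return dict(scores)

# Current trajectory and two past trajectories:
cur = {"duomo", "uffizi"}
hist = [{"duomo", "uffizi", "pitti"}, {"duomo", "bargello"}]
print(sknn_scores(cur, hist, k=2))
```

The recommended items are then the highest-scoring keys of the returned dictionary, as in the equation above.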
5 Experimental Results
We now compare the performance of our model, Q-BASEX, with the SKNN baseline. We perform a two-tailed paired t-test with a significance level of 0.05 in order to assess whether there is a significant difference between the best-performing model and the other; if a model is significantly better than the other on a specific metric, we underline its performance value in the following tables. The performance of the two compared RSs when behavioural clustering is employed on the Florence data set is reported in Table 2. Q-BASEX outperforms SKNN on all the evaluated metrics for top-1 and top-3 recommendations. In particular, Q-BASEX recommends next-POI visits that are more precise (higher Prec), increase the user's utility (higher Rew) and are closer to the user's expected experience (higher Sim). Moreover, Q-BASEX is less prone to recommend popular places (lower Pop) and diversifies the POI-visit suggestions among the users (higher Unique and I-Cov). The superiority of Q-BASEX
is confirmed on the Rome data set for all the metrics, both for top-1 and top-3 recommendations (Table 3). SKNN suggests less accurate next-POI visits (lower Prec), which also have a lower reward (Rew), compared to Q-BASEX. By looking at the metrics Sim, I-Cov, Unique and Pop, we can state that Q-BASEX suggests less popular (lower Pop) next-POI visits that are also more diverse (higher Unique and I-Cov) and more relevant (higher Sim).

Model | Top-n | Prec | Rew | Sim | I-Cov | Unique | Pop |
---|---|---|---|---|---|---|---|
Q-BASEX | 1 | 0.12 | 0.52 | 0.17 | 0.32 | 0.35 | 0.70 |
SKNN | 1 | 0.00 | −0.07 | 0.12 | 0.30 | 0.34 | 0.79 |
Q-BASEX | 3 | 0.10 | 0.34 | 0.17 | 0.60 | 0.23 | 0.69 |
SKNN | 3 | 0.06 | 0.00 | 0.16 | 0.43 | 0.17 | 0.86 |
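The significance test used above (two-tailed paired t-test, α = 0.05) amounts to computing a t statistic over the per-trajectory metric differences between the two models. A minimal sketch, with invented per-trajectory values (n = 6, so the two-tailed 0.05 critical value of Student's t with 5 degrees of freedom is 2.571):

```python
import math
from statistics import mean, stdev

def paired_t(x, y):
    """t statistic of a paired t-test between per-trajectory metric
    values of two models; degrees of freedom = n - 1."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

# Invented per-trajectory precision values for the two models:
qbasex = [0.12, 0.10, 0.11, 0.13, 0.09, 0.12]
sknn = [0.02, 0.03, 0.01, 0.04, 0.02, 0.03]
t = paired_t(qbasex, sknn)
# |t| is compared against the critical value of Student's t distribution
# with n - 1 degrees of freedom at the 0.05 level (2.571 for n = 6).
print(round(t, 2))
```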
These results show that Q-BASEX can better support a visitor in identifying POI visits that are relevant and aligned to the user's expected experience (high Prec, Sim and Rew) as well as interesting and diverse (high I-Cov, high Unique and low Pop).

We now analyse how the performances of Q-BASEX (top-3 recommendations) change if, instead of assigning a test user's trajectory to the cluster of similarly behaving users' trajectories, as done in the previous experiment, we assign it to a cluster containing trajectories with similar content features.

City | Features | Prec | Rew | Sim | I-Cov | Unique | Pop |
---|---|---|---|---|---|---|---|
Florence | Behaviour | 0.09 | 0.26 | 0.10 | 0.53 | 0.21 | 0.78 |
Florence | Content | 0.08 | 0.04 | 0.10 | 0.54 | 0.23 | 0.79 |
Rome | Behaviour | 0.10 | 0.34 | 0.17 | 0.60 | 0.23 | 0.69 |
Rome | Content | 0.09 | 0.04 | 0.17 | 0.60 | 0.26 | 0.71 |
These results confirm that Q-BASEX can better accomplish the task of an RS in the tourism domain: it suggests items that are relevant for a user, i.e., with high precision, reward and expected POI-visit experience (similarity), and it is also able to suggest to the whole user base different items that are also novel.

6 Conclusion and Future Works
We have presented a next-POI recommendation model, Q-BASEX, that is based on two computational steps: (1) clustering users' trajectories so that each cluster contains visit trajectories showing a similar behaviour; (2) harnessing a behaviour model, learned for each cluster, to recommend next-POI visit actions to a user for whom a partial POI-visit trajectory is known.

Q-BASEX has been compared with SKNN. Our conclusion is that Q-BASEX can generate recommendations that better match the user's context and interests, and also offers the best combination of precision and novelty, while making suggestions that are more rewarding for the user. Moreover, the effectiveness of Q-BASEX depends significantly on the used clusters of similarly behaving POI-visit trajectories: how the users visit POIs seems even more important than what the users visit.

In future work, we plan to extend Q-BASEX and to analyse its fairness in supporting the different users that fall within a cluster, i.e., users with different profiles that are treated by Q-BASEX in the same way.