Published in: Human-centric Computing and Information Sciences 1/2015

Open Access 01.12.2015 | Research

Personalized fitting recommendation based on support vector regression

Authors: Weimin Li, Xunfeng Li, Mengke Yao, Jiulei Jiang, Qun Jin


Abstract

Collaborative filtering (CF) is a popular method for personalized recommendation. Almost all existing CF methods rely only on rating data while ignoring important implicit information in the non-rating attributes of users and items, which have a significant impact on preferences. In this study, observing that the average ratings of users and items show a certain stability, we first propose a personalized fitting pattern that predicts missing ratings based on the similarity score set, combining both user-based and item-based CF. To further reduce the prediction error, we use non-rating attributes, such as a user’s age, gender and occupation, and an item’s release date and price, and we present a deviation adjustment method based on support vector regression. Experimental results on the MovieLens dataset show that our proposed algorithms increase recommendation accuracy compared with traditional CF.

Background

With the rapidly growing amount of information available on the Internet, people have to spend much more time selecting useful information. Recommender systems have emerged to solve this information overload problem. In recent years, recommender systems have been widely used in e-commerce and social networks to supply users with personalized information [1]. Collaborative filtering is one of the most successful techniques owing to its simplicity and efficiency, and it is a good complement to content-based filtering [2–7]. Its key process is to find similar users for the target user, or similar items for the predicted item. However, there remain inherent problems to be addressed, such as accuracy, data sparsity, cold start and scalability.
An important observation is that the average ratings of users and items show a certain stability over a given time period, which makes it possible to predict missing scores. In order to improve the quality of recommendation, various improved approaches, such as singular value decomposition (SVD) [8], bipartite networks [9, 10] and random walks [11], have been introduced into collaborative filtering.
However, all these methods ignore many latent features of users and items. For example, in the MovieLens datasets, we can easily find that most students prefer fantasy movies, and that the popularity of comedy movies far surpasses that of dramas. Under such conditions, rating data alone is not sufficient to recommend a suitable film to users. These approaches, much like traditional similarity-based collaborative filtering, consider only the rating data and leave a few important issues unaddressed. In this study, we first find the similarity sets of users and items. Whereas most methods consider the user or item independently, we construct the user similarity set and the item similarity set, combine them, and model the linear relation between this combined trusted set and the prediction results. Moreover, existing methods are designed to give better recommendations but do not take forecasting errors into consideration. In this paper, we find that there are relations between the forecasting errors and the features of users and items, and we establish an error feedback mechanism to improve the recommendation.
Considering that user and item information may be a key factor for appropriate recommendation, we use not only the average rating but also the non-rating attributes [12], such as a user’s age, gender and occupation, and an item’s release date and price. Using this information, we model a dynamic deviation adjustment based on support vector regression (SVR) [13]. The objective is to find the relation between the non-rating attributes and the prediction errors. We can then adjust the prediction errors and improve the recommendation effectively by making use of the characteristics of users and items [14].
In this paper, we develop a new score prediction method. We design a personalized fitting pattern whose training set comes from the similarity score set with regard to the target user and target item. In particular, we use the non-rating features of both users and items to further lower the residual error using SVR. The related experiments show that our proposed approach is more effective than both the traditional user-based CF and item-based CF.
The rest of this paper is organized as follows. “Related work” discusses related work on collaborative filtering and the support vector machine (SVM). In “Personalized fitting”, we present the design of a personalized fitting pattern and its training set, and describe the non-rating features we use. Deviation adjustment by SVR is described in “Deviation adjustment by support vector regression”. We verify our method on the MovieLens datasets and discuss comparisons in “Experiments”. Finally, we draw conclusions and outline future work in “Conclusion”.
Related work

Collaborative filtering (CF) is the most mature and popular method in recommender systems owing to its effectiveness and simplicity. Many CF-based recommendation systems developed in academia and industry are based on the assumption that the target user will prefer the items preferred by other users with similar preferences. Collaborative filtering can be divided into two categories: memory-based CF and model-based CF. In memory-based CF, recommended items are those that were preferred by users who share similar preferences with the target user (user-based CF), or those that are similar to other items preferred by the target user (item-based CF). These are also called similarity-based CF, since finding the most similar users or items is of great importance. Commonly used similarity measures include cosine similarity [15], adjusted cosine similarity, and the Pearson correlation coefficient [16].
Up to now, many improved similarity approaches have been proposed to improve recommendation quality. Breese [17] found that common behavior on less popular items can better reflect users’ preferences, and proposed an interest partition function to adjust the similarity measure. To overcome the data sparsity problem in collaborative filtering, approaches based on structural similarity between users [18] and on a bipartite network model of objects [10] have been proposed.
Model-based collaborative filtering, including probabilistic models [19], Bayesian models [20], factorization models [21] and latent class models [22], uses the rating data to train a model. SVD uses a low-rank approximation of the rating matrix to predict the true score matrix. Memory-based CF is very simple and easy to implement, while model-based CF shows a great advantage in scalability and flexibility. Notably, the recommendation systems in e-commerce, such as the Amazon book and eBay shopping recommenders, use traditional collaborative filtering methods.
There have been many efforts to improve personalized recommendation systems [23, 24]. Tagging provides important information for personalized recommendation. Zhang et al. proposed an integrated diffusion-based algorithm that makes use of both the user-item relations and the collaborative tagging information [25]. Shepitsen et al. presented a personalization algorithm for recommendation in folksonomies, which relies on hierarchical tag clusters [26]. Song et al. modeled a user’s adoption pattern as an information flow network for a recommendation system. The authors proposed an early adoption based information flow (EABIF) network by comparing the timestamps at which users access documents, and a topic-sensitive early adoption based information propagation (TEABIF) network based on the topics of the documents users accessed [27].
Although there are many methods to improve recommendation accuracy, none of them uses the similarity sets of users and items together for personalized recommendation. Furthermore, prediction errors are inevitable, yet deviation adjustment is not employed in these methods to improve accuracy.

Personalized fitting

In this section, we define the similarity score set, and then outline the Personalized Fitting (PF) framework.
Let \(U = \{ u_{1} ,u_{2} ,u_{3} , \ldots ,u_{M} \}\) be the set of users, and \(S = \{ s_{1} ,s_{2} ,s_{3} , \ldots ,s_{N} \}\) be the set of items in the recommender system. We assume \(r_{m,n}\) is the rating given to item \(s_{n} \in S\) by the user \(u_{m} \in U\). The historical rating records are represented by an \(M \times N\) matrix, as shown in Table 1. All the ratings in Table 1 represent the users’ historical behaviors.
Table 1
A history record matrix example

|      | s1 | s2 | s3 | s4 | s5 |
|------|----|----|----|----|----|
| u1   | 1  | 2  | 4  | 4  | 1  |
| u2   |    | 3  | 3  |    |    |
| u3   | 5  | 1  |    | 3  | 4  |
| um   | 2  | 4  | 1  | \(r_{m,n}\) | 5 |
| uM   | 2  |    | 3  | 5  | 2  |
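To make the data layout concrete, the following minimal sketch (in Python, assuming NumPy and a small 5 × 5 excerpt of the matrix) represents the history records with NaN for missing ratings and computes the per-user and per-item averages used later by the fitting pattern; the array contents simply mirror Table 1.

```python
import numpy as np

# Rows correspond to the users and columns to the items s1..s5 of Table 1;
# np.nan marks a missing rating, and the unknown r_{m,n} is also left as NaN here.
R = np.array([
    [1,      2,      4,      4,      1],
    [np.nan, 3,      3,      np.nan, np.nan],
    [5,      1,      np.nan, 3,      4],
    [2,      4,      1,      np.nan, 5],
    [2,      np.nan, 3,      5,      2],
])

# Average ratings over the observed entries only (ignoring NaN).
user_means = np.nanmean(R, axis=1)
item_means = np.nanmean(R, axis=0)
```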

Similarity measurement

There are many methods for computing the similarity between users and between items in collaborative recommender systems. The most popular are cosine-based similarity and the Pearson correlation coefficient, both computed from the rating matrix. In the cosine-based method, two users \(u_{m}\) and \(u_{n}\) are treated as two rating vectors over the items they have both reviewed. The similarity is then the cosine of the angle between the two vectors, as follows.
$$sim\left( {u_{m} ,u_{n} } \right) = cos\left( {\overrightarrow {{u_{m} }} ,\overrightarrow {{u_{n} }} } \right) = \frac{{\mathop \sum \nolimits_{{s_{i} \in C_{m,n} }} r_{m,i} r_{n,i} }}{{\sqrt {\mathop \sum \nolimits_{{s_{i} \in C_{m,n} }} r_{m,i}^{2} } \sqrt {\mathop \sum \nolimits_{{s_{i} \in C_{m,n} }} r_{n,i}^{2} } }}$$
(1)
The rating similarity can also be measured by the Pearson correlation coefficient, which is given as follows.
$$sim\left( {u_{m} ,u_{n} } \right) = \frac{{|\mathop \sum \nolimits_{{s_{i} \in R_{m} \mathop \cap \nolimits R_{n} }} \left( {r_{m,i} - \overline{{u_{m} }} } \right)\left( {r_{n,i} - \overline{{u_{n} }} } \right)|}}{{\sqrt {\mathop \sum \nolimits_{{s_{i} \in R_{m} \mathop \cap \nolimits R_{n} }} \left( {r_{m,i} - \overline{{u_{m} }} } \right)^{2} } \sqrt {\mathop \sum \nolimits_{{s_{i} \in R_{m} \mathop \cap \nolimits R_{n} }} \left( {r_{n,i} - \overline{{u_{n} }} } \right)^{2} } }}$$
(2)
where \(R_{m}\) (\(R_{n}\)) is the set of records rated by \(u_{m}\) (\(u_{n}\)). The correlation between \(u_{m}\) and \(u_{n}\) is computed on the records \(C_{m,n} = R_{m} \cap R_{n}\) rated by both \(u_{m}\) and \(u_{n}\), and \(\overline{{u_{m} }}\) and \(\overline{{u_{n} }}\) indicate the average scores of \(u_{m}\) and \(u_{n}\) over all records in \(R_{m}\) and \(R_{n}\), respectively. The similarity between items \(s_{m}\) and \(s_{n}\) can be calculated using the same principle as the user similarity. All similarity degrees in this paper are computed with the Pearson correlation coefficient.
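As an illustration, the following sketch computes Eq. (2) for two users, keeping the absolute value in the numerator exactly as written above; the function name pearson_sim and the NaN-based encoding of unrated items are our own conventions, not part of the original paper.

```python
import numpy as np

def pearson_sim(r_a: np.ndarray, r_b: np.ndarray) -> float:
    """Similarity between two rating vectors following Eq. (2).

    r_a and r_b are full-length rating vectors with np.nan for unrated items;
    the correlation is computed over the co-rated items C_{a,b}, while the
    means are taken over each user's own rated items R_a and R_b.
    """
    rated_a, rated_b = ~np.isnan(r_a), ~np.isnan(r_b)
    common = rated_a & rated_b
    if not common.any():
        return 0.0
    mean_a, mean_b = r_a[rated_a].mean(), r_b[rated_b].mean()
    da, db = r_a[common] - mean_a, r_b[common] - mean_b
    denom = np.sqrt((da ** 2).sum()) * np.sqrt((db ** 2).sum())
    return float(abs((da * db).sum()) / denom) if denom > 0 else 0.0
```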

Similarity score set

In similarity-based collaborative recommender systems, the k nearest users or items must first be chosen as trustworthy neighbors. Similar to the similarity-based collaborative filtering method, our main purpose is also to predict the rating \(\hat{r}_{m,n}\) of the target user \(u_{m} \in U\) for the target item \(s_{n} \in S\) that he/she has not rated yet. However, our prediction method is based on the Similarity Score Set. Therefore, we first find the similar user set of the target user and the similar item set of the target item, and then, based on these definitions, the Similarity Score (SS) set is constructed. We give the three definitions below.
Definition 1
The Similar Users Set (SU) of a target user \(u_{m}\) is the set of \(k\) users that are most similar to the target user \(u_{m}\); the influence of such similar users on his/her behavior is considered reliable.
$$SU\left( {u_{m} ,k} \right) = \{ u_{i} \in U \mid sim\left( {u_{m} ,u_{i} } \right)\;{\text{ranked in the top }}k{\text{ by user similarity}}\}$$
(3)
Definition 2
The Similar Items Set (SI) of a target item \(s_{n}\) is the set of \(k\) items that are most similar to the target item \(s_{n}\); that is, those items may share a similar popularity.
$$SI\left( {s_{n} ,k} \right) = \{ s_{j} \in S \mid sim\left( {s_{n} ,s_{j} } \right)\;{\text{ranked in the top }}k{\text{ by item similarity}}\}$$
(4)
Definition 3
The Similarity Score Set (SS) of a target user \(u_{m}\) and a target item \(s_{n}\) is the set of existing rating records given by users in the similar users set (SU) to items in the similar items set (SI).
$$SS\left( {u_{m} ,s_{n} ,k_{u} ,k_{s} } \right) = \left\{ {\left( {u_{i} ,s_{j} } \right) \mid u_{i} \in SU\left( {u_{m} ,k_{u} } \right) \wedge s_{j} \in SI\left( {s_{n} ,k_{s} } \right) \wedge r_{i,j} \ne \emptyset } \right\}$$
(5)
where \(k_{u}\) and \(k_{s}\) are regulation parameters for the similar user size and similar item size, respectively. \(SU\left( {u_{m} ,k_{u} } \right)\) is the set of \(k_{u}\) nearest-neighbor users of \(u_{m}\), and \(SI(s_{n} ,k_{s} )\) is the set of \(k_{s}\) nearest-neighbor items of \(s_{n}\). In this paper, in order to improve the prediction precision, we further optimize the training data by constructing the similarity score set. The Similarity Score Set is a reliable training set for predicting ratings, since it takes advantage of both the user-based and item-based collaborative filtering approaches.
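The construction of SU, SI and SS in Definitions 1–3 can be sketched as follows; the helper names (top_k_neighbors, similarity_score_set) and the precomputed similarity matrices are illustrative assumptions rather than details fixed by the paper.

```python
import numpy as np

def top_k_neighbors(sim_row, self_idx, k):
    # Indices ranked by decreasing similarity, excluding the target itself
    # (Definitions 1 and 2).
    order = np.argsort(-sim_row)
    return [i for i in order if i != self_idx][:k]

def similarity_score_set(R, sim_users, sim_items, m, n, k_u, k_s):
    # SS(u_m, s_n, k_u, k_s) of Eq. (5): the existing ratings given by the
    # k_u most similar users to the k_s most similar items.
    su = top_k_neighbors(sim_users[m], m, k_u)
    si = top_k_neighbors(sim_items[n], n, k_s)
    return [(i, j, R[i, j]) for i in su for j in si if not np.isnan(R[i, j])]
```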

Personalized fitting

To predict the rating \(\hat{r}_{m,n}\), we assume a linear relationship between the rating \(r_{m,n}\) and the average ratings of both \(u_{m}\) and \(s_{n}\). The Similarity Score Set \(SS(u_{m} ,s_{n} ,k_{u} ,k_{s} )\) can be easily obtained once the adjusting parameters \(k_{u}\) and \(k_{s}\) are given. For each tuple \((u_{i} ,s_{j} ) \in SS\left( {u_{m} ,s_{n} ,k_{u} ,k_{s} } \right)\), we define a Personalized Fitting triple \(\delta_{k} \left( {\overline{{u_{i} }} ,\overline{{s_{j} }} ,r_{i,j} } \right)\), where \(\overline{{u_{i} }} = \frac{1}{{\left| {R_{i} } \right|}}\sum\nolimits_{{s_{j} \in R_{i} }} {r_{i,j} }\) with \(R_{i} = \{ s_{j} \in S|r_{i,j} \ne \emptyset \}\), \(\overline{{s_{j} }} = \frac{1}{{\left| {T_{j} } \right|}}\sum\nolimits_{{u_{i} \in T_{j} }} {r_{i,j} }\) with \(T_{j} = \{ u_{i} \in U|r_{i,j} \ne \emptyset \}\), and \(0 < k \le \left| {SS\left( {u_{m} ,s_{n} ,k_{u} ,k_{s} } \right)} \right|\). To simplify the later descriptions, we write the Personalized Fitting triples as \(\delta_{k} \left( {x_{k} ,y_{k} ,z_{k} } \right)\). The values \(x_{k} ,y_{k} ,z_{k}\) enter Eq. (6), and obtaining the best result amounts to adjusting the parameters \(\lambda_{m}\) and \(\mu_{n}\). The loss function describes the proximity between the predicted value and the true value under different parameters. The adjusting parameters \(\lambda_{m}\) and \(\mu_{n}\) are obtained by minimizing the following loss function.
$$Los\left( {\lambda_{m} ,\mu_{n} } \right) = \mathop \sum \limits_{k} \left( {\lambda_{m} x_{k} + \mu_{n} y_{k} - z_{k} } \right)^{2}$$
(6)
In general, least squares or gradient descent [28] can be used to minimize Eq. (6); we adopt gradient descent since it reaches high precision in less time.
$$\left\{ {\begin{array}{*{20}c} {\frac{\partial Los}{{\partial \lambda_{m} }} = 2\mathop \sum \nolimits_{k} \left( {\lambda_{m} x_{k} + \mu_{n} y_{k} - z_{k} } \right)x_{k} } \\ {\frac{\partial Los}{{\partial \mu_{n} }} = 2\mathop \sum \nolimits_{k} \left( {\lambda_{m} x_{k} + \mu_{n} y_{k} - z_{k} } \right)y_{k} } \\ \end{array} } \right.$$
(7)
In this paper, firstly, we take the derivatives with respect to parameters \(\lambda_{m}\) and \(\mu_{n}\). Then, according to the gradient descent method, we should update the parameters along the gradient descent direction. Therefore, the recursion formulas can be given as follows.
$$\left\{ {\begin{array}{*{20}c} {\lambda_{m} = \lambda_{m} - \theta \frac{\partial Los}{{\partial \lambda_{m} }}} \\ {\mu_{n} = \mu_{n} - \theta \frac{\partial Los}{{\partial \mu_{n} }}} \\ \end{array} } \right.$$
(8)
where the learning rate θ is generally set to 0.001, and the parameters \(\lambda_{m}\) and \(\mu_{n}\) are obtained by the gradient descent method shown in Algorithm 1. The predicted rating of the target user \(u_{m}\) for the item \(s_{n}\) is then expressed as Eq. (9).
$$\hat{r}_{m,n} = \lambda_{m} \overline{{u_{m} }} + \mu_{n} \overline{{s_{n} }}$$
(9)
[Algorithm 1: learning the fitting parameters \(\lambda_{m}\) and \(\mu_{n}\) by gradient descent; original figure not reproduced]
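A minimal sketch of this parameter learning step is given below; it follows the updates of Eqs. (7)–(8) and the prediction of Eq. (9), while the initial values, iteration count and function names are illustrative choices rather than details fixed by the paper.

```python
import numpy as np

def fit_personalized(ss_triples, user_means, item_means, theta=0.001, n_iter=1000):
    """Fit lambda_m and mu_n by gradient descent on the loss of Eq. (6).

    ss_triples is the output of similarity_score_set(); each (i, j, r_ij)
    contributes a fitting triple (x_k, y_k, z_k) = (mean of u_i, mean of s_j, r_ij).
    """
    x = np.array([user_means[i] for i, _, _ in ss_triples])
    y = np.array([item_means[j] for _, j, _ in ss_triples])
    z = np.array([r for _, _, r in ss_triples])
    lam, mu = 0.5, 0.5                       # illustrative initial values
    for _ in range(n_iter):
        err = lam * x + mu * y - z
        lam -= theta * 2 * (err * x).sum()   # lambda update, Eqs. (7)-(8)
        mu  -= theta * 2 * (err * y).sum()   # mu update, Eqs. (7)-(8)
    return lam, mu

# Predicted rating, Eq. (9):
# r_hat = lam * user_means[m] + mu * item_means[n]
```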
The Personalized Fitting (PF) algorithm (shown in Algorithm 2) not only considers both user-based and item-based collaborative filtering, but also utilizes the averages of the users’ and items’ ratings, since these are important indicators of preference in recommender systems.

[Algorithm 2: the Personalized Fitting prediction procedure; original figure not reproduced]

Deviation adjustment by support vector regression

In the previous section, we assumed that the rating \(r_{m,n}\) depends only on the average ratings \(\bar{u}_{m}\) and \(\bar{s}_{n}\). When we use this linear fitting model to describe their relationship, the predicted rating \(\hat{r}_{m,n}\) can be obtained by the traditional CF or by our proposed PF (the experiments in “Experiments” show that Personalized Fitting achieves better accuracy than the traditional similarity-based CF). However, some other non-rating factors (such as a user’s age, gender and occupation, and an item’s category, brand, etc.) also have an important effect on \(\hat{r}_{m,n}\). We assume that a certain relationship exists between the residual \((r_{m,n} - \hat{r}_{m,n} )\) and those non-rating factors, and we propose a deviation adjustment method based on SVR to further improve the rating prediction accuracy.

SVR

The Support Vector Machine was proposed by Vapnik [29, 30]. It is a general machine learning algorithm built on a solid statistical theory foundation. SVM learning algorithms are based on structural risk minimization, which differs from the empirical risk minimization used in traditional machine learning algorithms. Moreover, SVM has shown great advantages in small-sample learning and nonlinear classification, and it offers good generalization ability [31]. Vapnik further extended SVM to regression forecasting by introducing the ε-insensitive loss function, thereby building the SVR theory [32, 33]. In essence, SVR is a convex quadratic optimization problem. Its discriminant function is given as follows.
$$f\left( x \right) = \mathop \sum \limits_{i = 1}^{t} (a_{i}^{*} - a_{i} )K(x,x_{i} ) + b$$
(10)
where \(K\left( {x,x_{i} } \right)\) is the kernel function. Selecting the kernel function is the core step of SVR in solving nonlinear regression problems. The basic idea is to map the original space into a new feature space \(\varPhi (x)\) via the kernel function, in which the data become linearly separable. We only need a function satisfying \(K\left( {u,v} \right) = \langle \varPhi \left( u \right),\varPhi \left( v \right)\rangle\), because only dot products are used in the new training model.
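For instance, the RBF kernel used later in the experiments can be evaluated directly in the original feature space, without ever constructing \(\varPhi (x)\) explicitly; the value of gamma below is an arbitrary illustrative choice.

```python
import numpy as np

def rbf_kernel(x, x_i, gamma=0.5):
    # K(x, x_i) = exp(-gamma * ||x - x_i||^2): only distances in the original
    # space are needed, never the explicit mapping Phi.
    diff = np.asarray(x, dtype=float) - np.asarray(x_i, dtype=float)
    return float(np.exp(-gamma * np.dot(diff, diff)))
```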

Deviation adjustment model

In this model, the residual (\(r_{m,n} - \hat{r}_{m,n}\)) is explained by the user features \(P_{m}\) and the item features \(Q_{n}\). Here, the selected user features are gender (\({\text{P}}_{u}^{1}\)), age (\({\text{P}}_{u}^{2}\)) and occupation (\({\text{P}}_{u}^{3}\)), and the selected item features are release year (\({\text{Q}}_{s}^{1}\)) and genre (\({\text{Q}}_{s}^{2}\)). Table 2 illustrates an example.
Table 2
A simple example training data for our model

| \(u_{m}\) | \(s_{n}\) | Deviation | \({\text{P}}_{u}^{1}\) | \({\text{P}}_{u}^{2}\) | \({\text{P}}_{u}^{3}\) | \({\text{Q}}_{s}^{1}\) | \({\text{Q}}_{s}^{2}\) |
|---|---|---|---|---|---|---|---|
| 1 | 15 | 0.35256 | 0 | 1 | 20 | 2 | 9 |
| 45 | 157 | −0.24610 | 0 | 1 | 15 | 3 | 6 |
| 108 | 50 | 0.54350 | 1 | 2 | 4 | 1 | 4 |
| 204 | 123 | 1.23811 | 0 | 3 | 11 | 0 | 13 |
| 335 | 1001 | −1.47634 | 1 | 1 | 8 | 2 | 4 |
| … | … | … | … | … | … | … | … |
A trained SVR model maps the user features and the item features to a deviation. We use this deviation to adjust the predicted rating and obtain better results.
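The following sketch shows one way to realize this adjustment, using scikit-learn’s SVR as a stand-in for the SVM toolbox used in the paper; the three training rows are copied from Table 2, while the hyperparameters, the feature encoding of the new pair, and the placeholder PF prediction of 3.7 are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR

# Feature order follows Table 2: gender P_u^1, age P_u^2, occupation P_u^3,
# release year Q_s^1, genre Q_s^2; the target is the deviation r_{m,n} - r_hat_{m,n}.
X_train = np.array([[0, 1, 20, 2, 9],
                    [0, 1, 15, 3, 6],
                    [1, 2,  4, 1, 4]], dtype=float)
y_train = np.array([0.35256, -0.24610, 0.54350])

model = SVR(kernel="rbf", C=1.0, epsilon=0.1)   # hyperparameters are illustrative
model.fit(X_train, y_train)

# Adjust a PF prediction for a new (user, item) pair by adding the predicted deviation.
x_new = np.array([[1, 1, 8, 2, 4]], dtype=float)
adjusted_rating = 3.7 + model.predict(x_new)[0]  # 3.7 stands in for the PF prediction
```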

Experiments

Data set and setup

Our experiments were performed on MovieLens (http://www.movielens.umn.edu), a real and classical movie dataset collected by the GroupLens Research Project at the University of Minnesota. The main dataset includes 100,000 ratings from 943 users on 1,682 movies. The dataset also contains a script that splits it into two parts: a training set (80%) and a testing set (20%). The biggest advantage of the MovieLens dataset is that we can easily extract non-rating features for our work. In our experiments, we extracted the important user features of gender, occupation and age, and the movie features of category and release year.
In this work, we consider prediction accuracy as the only evaluation criterion for comparing our method with user-based CF and item-based CF. The mean absolute error (MAE) and root mean squared error (RMSE) are the most widely used indicators in collaborative filtering; they are defined as follows.
$${\text{MAE}} = \frac{{\mathop \sum \nolimits_{i = 1}^{N} \left| {r_{m,n} - \hat{r}_{m,n} } \right|}}{N}$$
(11)
$${\text{RMSE}} = \sqrt {\frac{{\mathop \sum \nolimits_{i = 1}^{N} \left| {r_{m,n} - \hat{r}_{m,n} } \right|^{2} }}{N}}$$
(12)
where \(r_{m,n}\) is the actual rating that user \(u_{m}\) gave to item \(s_{n}\) in the testing set, \(\hat{r}_{m,n}\) is the corresponding rating predicted by a given method using the training set, and \(N\) is the number of testing records. The smaller the MAE and RMSE, the better the prediction quality of the corresponding method.
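Both indicators can be computed directly from the vectors of actual and predicted test ratings, as in the small sketch below (the function names are ours).

```python
import numpy as np

def mae(actual, predicted):
    # Eq. (11): mean absolute error over the N testing records.
    return float(np.mean(np.abs(np.asarray(actual) - np.asarray(predicted))))

def rmse(actual, predicted):
    # Eq. (12): root mean squared error over the N testing records.
    return float(np.sqrt(np.mean((np.asarray(actual) - np.asarray(predicted)) ** 2)))
```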

Experimental result and analysis

In order to validate the effectiveness of our PF algorithm, we compare it with the traditional collaborative filtering methods, namely user-based CF and item-based CF. In our PF algorithm, the similar-user adjusting parameter \(k_{u}\) and the similar-item adjusting parameter \(k_{s}\) have a great influence on the experimental results. To present the results intuitively, we let \(k_{u} = k_{s} = k\), where \(k\) is the number of nearest neighbors (users or items) used in traditional collaborative filtering. The experimental results are shown in Figures 1 and 2.
As Figures 1 and 2 show, item-based CF obviously achieves lower MAE and RMSE values than user-based CF. The proposed PF method considers both user-based and item-based collaborative filtering and utilizes the averages of the users’ and items’ ratings, and the results obtained from the PF algorithm are better than both. As the neighborhood size grows, the MAE decreases; however, once the neighborhood size exceeds 80, the MAE increases again due to over-fitting. The RMSE becomes stable when the neighborhood size is larger than 50. The PF curve always lies below those of user-based CF and item-based CF, indicating that our proposed method achieves higher prediction accuracy. This is because our model exploits the stability of the average ratings of users and items.
We selected the radial basis function (RBF) as the kernel function for the SVR model and directly invoked the SVM toolbox in MATLAB 2008. In order to verify the effectiveness of our deviation adjustment mechanism, we set \(k_{u} = k_{s} = k = 80\), since \(k = 80\) is the best neighborhood size for our PF algorithm. We also implemented the deviation adjustment with a BP neural network (BPNN). Detailed comparisons of MAE are shown in Table 3. From Table 3, we can see that the deviation adjustment mechanism further lowers the MAE for both the traditional collaborative filtering methods and our PF. Compared with user-based CF and item-based CF, the proposed PF method has higher predictive accuracy. Moreover, the SVR model is better than the BPNN model, because SVR can reach the global optimum.
Table 3
Comparing different deviation adjustment models with SVR by MAE

| Method | BASIC | BPNN | SVR |
|---|---|---|---|
| User-based CF | 0.760 | 0.756 | 0.750 |
| Item-based CF | 0.761 | 0.755 | 0.749 |
| PF | 0.745 | 0.743 | 0.740 |

Conclusion

In this paper, a personalized fitting recommendation approach has been proposed that combines the characteristics of the user’s and item’s similarity score sets. We use it to predict missing ratings, exploiting the fact that the average ratings of users and items remain stable over a period of time. Most traditional collaborative filtering methods consider only the rating data in the rating matrix. In this paper, we have further presented a deviation adjustment mechanism based on SVR that uses the non-rating features. The experimental results show that the non-rating attributes contribute to reducing the prediction errors.
In future work, we will consider timeliness and further optimize our algorithms to obtain better personalized recommendation results.

Authors’ contributions

WL proposed the idea of personalized fitting and drafted the manuscript. XL and MY designed and developed the algorithms of personalized fitting. MY conducted the experimental data collection and analysis. JJ designed the deviation adjustment. QJ supervised the work and critically revised the manuscript. All authors read and approved the final manuscript.

Acknowledgements

This work is supported by Natural Science Foundation of Ningxia under Grant No. NZ12212.

Compliance with ethical standard

Competing interests: The authors declare that they have no competing interests.
Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
1. Bank M, Franke J (2010) Social networks as data source for recommendation systems. E-commerce and web technologies. Lecture Notes in Business Information Processing, vol 61, pp 49–60
2. Su X, Khoshgoftaar TM (2009) A survey of collaborative filtering techniques. Adv Artif Intel 2009:19, Article ID 421425
3. Yu K, Schwaighofer A, Tresp V, Xu X, Kriegel H-P (2004) Probabilistic memory-based collaborative filtering. IEEE Trans Knowl Data Eng 16(1):56–69
4. Jeong B, Lee J, Cho H (2010) Improving memory-based collaborative filtering via similarity updating and prediction modulation. Inf Sci 180(5):602–612
5. Pennock DM, Horvitz E, Lawrence S, Giles CL (2000) Collaborative filtering by personality diagnosis: a hybrid memory- and model-based approach. In: Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, pp 473–480
6. Sarwar B, Karypis G, Konstan J, Riedl J (2001) Item-based collaborative filtering recommendation algorithms. In: Proceedings of the 10th International Conference on World Wide Web (WWW '01). ACM, New York, pp 285–295
7. Konstan JA, Miller BN, Maltz D (1997) GroupLens: applying collaborative filtering to usenet news. Commun ACM 40(3):77–87
8. Brand M (2003) Fast online SVD revisions for lightweight recommender systems. In: Proceedings of the 2003 SIAM International Conference on Data Mining, pp 37–46
9. Zhou T, Ren J, Medo M, Zhang YC (2007) Bipartite network projection and personal recommendation. Phys Rev E 76(4):046115
10. Liu JG, Zhou T, Xuan ZG, Che HA, Wang BH, Zhang YC (2010) Degree correlation of bipartite network on personalized recommendation. Int J Mod Phys C 21(01):137–147
11. Fouss F, Pirotte A, Renders JM, Saerens M (2007) Random-walk computation of similarities between nodes of a graph with application to collaborative recommendation. IEEE Trans Knowl Data Eng 19(3):355–369
12. Hallinan B, Striphas T (2014) Recommended for you: the Netflix Prize and the production of algorithmic culture. New Media Society, 1461444814538646
13. Li J, Wang X, Sun K, Ren J (2014) Recommendation algorithm with support vector regression based on user characteristics. In: Proceedings of the 9th International Symposium on Linear Drives for Industry Applications, vol 3. Springer, Berlin, Heidelberg, pp 455–462
14. Pinheiro A, Cappelli C, Maciel C (2014) Increasing information auditability for social network users. In: Human Interface and the Management of Information. Information and Knowledge Design and Evaluation. Springer, pp 536–547
15. Adomavicius G, Tuzhilin A (2005) Toward the next generation of recommender systems: a survey of the state-of-the-art and possible extensions. IEEE Trans Knowl Data Eng 17(6):734–749
16. Ahlgren P, Jarneving B, Rousseau R (2003) Requirements for a cocitation similarity measure, with special reference to Pearson's correlation coefficient. J Am Soc Inform Sci Technol 54(6):550–560
17. Breese JS, Heckerman D, Kadie C (1998) Empirical analysis of predictive algorithms for collaborative filtering. In: Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence (UAI), pp 43–52
18. Jeh G, Widom J (2002) SimRank: a measure of structural-context similarity. In: Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, pp 538–543
19. Popescul R, Ungar LH (2001) Probabilistic models for unified collaborative and content-based recommendation in sparse-data environments. In: Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence (UAI-2001). Morgan Kaufmann, San Francisco, pp 437–444
20. Miyahara K, Pazzani MJ (2000) Collaborative filtering with the simple Bayesian classifier. In: Proceedings of the 6th Pacific International Conference on Artificial Intelligence. Berlin, pp 679–689
21. Koren Y (2008) Factorization meets the neighborhood: a multifaceted collaborative filtering model. In: Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, pp 426–434
22. Hofmann T, Puzicha J (1999) Latent class models for collaborative filtering. In: Proceedings of the 16th International Joint Conference on Artificial Intelligence (IJCAI '99), pp 688–693
23. Wu YH, Chen YC, Chen ALP (2001) Enabling personalized recommendation on the Web based on user interests and behaviors. In: Proceedings of the Eleventh International Workshop on Research Issues in Data Engineering, pp 17–24
24. Hung L (2005) A personalized recommendation system based on product taxonomy for one-to-one marketing online. Expert Syst Appl 29(2):383–392
25. Zhang ZK, Zhou T, Zhang YC (2010) Personalized recommendation via integrated diffusion on user–item–tag tripartite graphs. Physica A 389(1):179–186
26. Shepitsen A, Gemmell J, Mobasher B, Burke R (2008) Personalized recommendation in social tagging systems using hierarchical clustering. In: Proceedings of the 2008 ACM Conference on Recommender Systems (RecSys '08). ACM, New York, pp 259–266
27. Song X, Tseng BL, Lin CY, Sun MT (2006) Personalized recommendation driven by information flow. In: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '06). ACM, New York, pp 509–516
28. Baird L, Moore AW (1999) Gradient descent for general reinforcement learning. Adv Neural Inform Process Syst 11:968–974
29. Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20(3):273
30. Vapnik V, Golowich SE, Smola AJ (1996) Support vector method for function approximation, regression estimation and signal processing. In: Proceedings of the 1996 Neural Information Processing Systems Conference (NIPS 1996), pp 281–287
31. Xiao XF, Xu LH, Zhu Y (2012) Short-term traffic flow prediction based on SVM. J Guangxi Normal Univ Nat Sci Edn 30(4):13–17
32. Basak D, Pal S, Patranabis DC (2007) Support vector regression. Neural Inform Process Lett Rev 11(10):203–224
33. Wu CH, Ho JM, Lee DT (2004) Travel-time prediction with support vector regression. IEEE Trans Intel Transport Syst 5(4):276–281
Metadata
Title: Personalized fitting recommendation based on support vector regression
Authors: Weimin Li, Xunfeng Li, Mengke Yao, Jiulei Jiang, Qun Jin
Publication date: 01.12.2015
Publisher: Springer Berlin Heidelberg
Published in: Human-centric Computing and Information Sciences, Issue 1/2015
Electronic ISSN: 2192-1962
DOI: https://doi.org/10.1186/s13673-015-0041-2
