Introduction
Motivation
Contributions
Personal Recommendation in Consumer Electronics
Related Work
Federated Learning
Personal Recommendation
Discussion
| Category | Work | Limits |
|---|---|---|
| Federated Learning | Fu et al. [25] | Tests the solution only on the simple MNIST dataset. |
| Federated Learning | Zhou et al. [26]; Yin et al. [27] | Requires high computation time during the encryption phase. |
| Personal Recommendation | Li et al. [28]; Solairaj et al. [29] | Needs to share data for multi-user recommendation. |
| Personal Recommendation | Swaminathan et al. [30]; Ma et al. [31]; Walek and Fajmon [18] | Lacks advanced deep learning models in the training. |
FLT-PR: Federated Learning-Based Transformers for Personal Recommendation
Principle
Contrastive Learning for Feature Embedding
- X is the set of all user-item interactions in the recommender system.
- U is the set of users, and I is the set of items.
- E is the embedding space with dimension d.
- \(f_{\theta }: U \cup I \rightarrow E\) is the function that maps users and items to their corresponding embeddings in the space E, parameterized by \(\theta \in \mathbb {R}^{d}\) (see the sketch below).
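As a concrete illustration, the following is a minimal sketch of \(f_{\theta }\) as two learned lookup tables, one for users and one for items. The class name, the PyTorch backend, and the dimension d = 64 are illustrative assumptions, not the authors' exact architecture; the L2 normalization prepares the embeddings for an angle-based loss.

```python
# Minimal sketch of f_theta: U ∪ I -> E as two embedding tables,
# assuming PyTorch; names and sizes are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureEmbedding(nn.Module):
    def __init__(self, num_users: int, num_items: int, d: int = 64):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, d)  # f_theta restricted to U
        self.item_emb = nn.Embedding(num_items, d)  # f_theta restricted to I

    def forward(self, user_ids: torch.Tensor, item_ids: torch.Tensor):
        # L2-normalize so that similarity depends only on the angle
        # between a user embedding and an item embedding.
        u = F.normalize(self.user_emb(user_ids), dim=-1)
        v = F.normalize(self.item_emb(item_ids), dim=-1)
        return u, v
```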
Angular Contrastive Loss
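The exact angular contrastive loss of FLT-PR is not reproduced in this excerpt. As an illustrative stand-in, the sketch below implements a common angular formulation: because the embeddings are L2-normalized, the dot product \(u^{\top }v\) equals the cosine of the angle between them, and each observed user-item pair is contrasted against in-batch negatives.

```python
# Illustrative angular contrastive objective (InfoNCE over cosines of
# the user-item angles); NOT the paper's exact loss, which is not
# included in this excerpt.
import torch
import torch.nn.functional as F

def angular_contrastive_loss(u, v_pos, temperature: float = 0.1):
    """u, v_pos: (B, d) L2-normalized embeddings of observed user-item pairs."""
    # Cosine of the angle between every user and every item in the batch.
    logits = u @ v_pos.t() / temperature          # shape (B, B)
    targets = torch.arange(u.size(0), device=u.device)
    # Each user's positive item sits on the diagonal; all others act as negatives.
    return F.cross_entropy(logits, targets)
```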
Training
Federated Learning
- a. Sending trained local models — The system initialization involves establishing public parameters, generating keys, and exchanging data among the different system roles. The trusted authority takes charge of generating the various codes essential for the transmission and verification of model data. The server then receives the local models that consumers have individually trained, comprising the models' architectures, weights, and the IDs of the consumers within each cluster. These elements are homomorphically encrypted prior to transmission.
- b. Checking model integrity — In our designed federated learning system, the trusted authority plays a crucial role in maintaining the integrity and security of the recommendation process. Acting as the guardian of trust, this authority oversees the preservation, signing, and issuance of digital certificates. These certificates attest to the authenticity of each model uploaded to the central server. The trusted authority also verifies the accuracy of these models through rigorous checks, thereby upholding the reliability and effectiveness of the designed federated learning system (see the signature sketch after this list).
- c. Model aggregation — Two types of aggregation are used. The first is the aggregation of the local models within each cluster of consumers, \(W^{(local)_{C_j}}\); the second is the aggregation of all models to obtain the global model, \(W^{(global)}\) (see the numeric sketch after this list). The detailed formulas are
  $$\begin{aligned} W^{(local)_{C_j}} = \sum _{u_i \in C_j} \frac{|d_i|}{\sum _{d_i \in D^{C_j}} |d_i|} W_i^{(local)} \end{aligned}$$ (3)
  and
  $$\begin{aligned} W^{(global)} = \sum _{u_i \in \mathcal {U}} \frac{|d_i|}{\sum _{d_i \in D} |d_i|} W_i^{(local)} \end{aligned}$$ (4)
  where \(C_j\) is the \(j^{th}\) cluster of consumers, \(\mathcal {U}=\{u_1, u_2, \ldots , u_k\}\) is the set of k users, \(D=\{d_1, d_2, \ldots , d_k\}\) is the set of k local datasets, one for each user in \(\mathcal {U}\), and \(W_i^{(local)}\) denotes the weights of the local model of consumer \(u_i\).
- d. Sharing updated global model — Subsequently, the server transmits the aggregated results to all consumers. When consumers are influenced by similar individuals, such as friends, the aggregated local model of their cluster is employed for recommendations; otherwise, the aggregated global model is used.
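For step (b), the excerpt describes certificate-based signing and verification by the trusted authority but gives no protocol details. The sketch below is one plausible simplification: the authority signs a digest of the serialized local model with Ed25519 (via the `cryptography` package), and the server verifies the signature before aggregation; full certificate handling is omitted.

```python
# Illustrative integrity check for step (b): sign and verify a serialized
# local model. A simplified stand-in, not the paper's exact protocol.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

authority_key = Ed25519PrivateKey.generate()   # held by the trusted authority
authority_pub = authority_key.public_key()     # distributed to the server

model_bytes = b"serialized local model of consumer u_i"  # placeholder payload
signature = authority_key.sign(model_bytes)

try:
    authority_pub.verify(signature, model_bytes)  # raises if tampered with
    print("model accepted for aggregation")
except InvalidSignature:
    print("model rejected: integrity check failed")
```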
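The aggregation rules in step (c) are data-size-weighted averages in the style of FedAvg. Below is a minimal numeric sketch of Eqs. (3) and (4); the user IDs, data sizes, cluster layout, and plain-numpy weight vectors are illustrative assumptions, and the homomorphic encryption of step (a) is left out.

```python
# Numeric sketch of Eqs. (3) and (4): weighted averaging of local model
# weights, first per cluster, then globally over all users.
import numpy as np

def aggregate(local_weights, data_sizes):
    """local_weights: list of weight arrays W_i; data_sizes: list of |d_i|."""
    total = sum(data_sizes)
    return sum((n / total) * w for w, n in zip(local_weights, data_sizes))

# Toy example: three consumers in two clusters (all values hypothetical).
W = {"u1": np.array([1.0, 2.0]), "u2": np.array([3.0, 0.0]),
     "u3": np.array([0.0, 4.0])}
sizes = {"u1": 100, "u2": 300, "u3": 200}
clusters = {"C1": ["u1", "u2"], "C2": ["u3"]}

# Eq. (3): one aggregated model per cluster C_j.
W_local = {c: aggregate([W[u] for u in us], [sizes[u] for u in us])
           for c, us in clusters.items()}
# Eq. (4): one global model over all users in U.
W_global = aggregate(list(W.values()), list(sizes.values()))
```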
Performance Evaluation
Experimental Settings
| Dataset | Metric | KGAT | KAUR | RippleNet | CFKG | MKR | FLT-PR |
|---|---|---|---|---|---|---|---|
| MovieLens-1M | Recall@10 | 0.17 | 0.18 | 0.12 | 0.16 | 0.14 | 0.19 |
| | NDCG@10 | 0.27 | 0.28 | 0.20 | 0.25 | 0.22 | 0.32 |
| | Recall@20 | 0.25 | 0.27 | 0.21 | 0.25 | 0.22 | 0.26 |
| | NDCG@20 | 0.29 | 0.28 | 0.21 | 0.24 | 0.25 | 0.28 |
| | Recall@50 | 0.43 | 0.42 | 0.35 | 0.40 | 0.40 | 0.46 |
| | NDCG@50 | 0.32 | 0.31 | 0.24 | 0.31 | 0.27 | 0.35 |
| Amazon-book | Recall@10 | 0.19 | 0.17 | 0.07 | 0.13 | 0.10 | 0.18 |
| | NDCG@10 | 0.08 | 0.11 | 0.06 | 0.09 | 0.06 | 0.10 |
| | Recall@20 | 0.20 | 0.25 | 0.12 | 0.18 | 0.17 | 0.29 |
| | NDCG@20 | 0.10 | 0.15 | 0.07 | 0.11 | 0.09 | 0.19 |
| | Recall@50 | 0.30 | 0.38 | 0.24 | 0.33 | 0.29 | 0.42 |
| | NDCG@50 | 0.12 | 0.17 | 0.11 | 0.14 | 0.12 | 0.21 |
| Average | – | 0.22 | 0.25 | 0.17 | 0.22 | 0.19 | 0.27 |
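For reference, Recall@K and NDCG@K as reported above are typically computed per user from the model's ranked recommendation list and the held-out relevant items, then averaged over all users. The snippet below is a minimal single-user sketch; the item IDs are hypothetical and the paper's exact evaluation protocol may differ.

```python
# Standard per-user Recall@K and NDCG@K; illustrative, not the paper's
# exact evaluation code.
import math

def recall_at_k(ranked, relevant, k):
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / len(relevant)

def ndcg_at_k(ranked, relevant, k):
    # DCG discounts each hit by its rank position; IDCG is the best case.
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in relevant)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / idcg

ranked = ["i3", "i7", "i1", "i9", "i4"]   # model's top-5 for one user
relevant = {"i1", "i4", "i8"}             # held-out ground truth
print(recall_at_k(ranked, relevant, 5))   # 2/3, two of three hits retrieved
print(ndcg_at_k(ranked, relevant, 5))     # additionally graded by rank position
```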