Published in: International Journal of Data Science and Analytics 1/2021

17-05-2021 | Regular Paper

Enhancing personalized modeling via weighted and adversarial learning

Authors: Wei Du, Xintao Wu


Abstract

Data generation sources, such as mobile devices, embedded sensors, and other intelligent equipment, have proliferated in the past few years. These growing data sources push the deployment of deep learning models in a distributed manner. However, traditional distributed deep learning builds a single global model over all collected data and may overlook components that are of vital importance to individual users. In this paper, we propose an adversarial learning framework that allows an individual user to build a personalized model. Our framework consists of two stages: efficient selection of similar data from other users, followed by adversarial training. Instead of selecting similar data by computing hand-designed similarity metrics, we train an auto-encoder and a generative adversarial network (GAN) on the individual user's data and use them to request similar data from other users. To further improve the personalized model's performance, we develop two approaches that combine the requested data with the user's own data. The first applies weighted learning to capture the varying importance of the requested data; the second applies adversarial training to minimize the distribution discrepancy between the requested data and the user's own data. Experimental results demonstrate the effectiveness of the proposed framework.
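The first stage replaces hand-designed similarity metrics with a learned model of the user's data. One plausible mechanism, sketched below, is to rank another user's candidate samples by how well a model fitted to the user's own data reconstructs them. The paper trains a deep auto-encoder and a GAN; the closed-form linear autoencoder (top principal directions via SVD) and all names here are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: the user's own data lies near a one-dimensional subspace of R^3.
basis = np.array([1.0, 2.0, -1.0])
own = np.outer(rng.normal(size=100), basis) + 0.05 * rng.normal(size=(100, 3))

# Linear "autoencoder" fitted in closed form: the top-k principal directions
# serve as tied encoder/decoder weights (the paper trains a deep auto-encoder).
mu = own.mean(axis=0)
_, _, Vt = np.linalg.svd(own - mu, full_matrices=False)
V = Vt[:1].T  # k = 1 latent dimension

def recon_error(x):
    """Distance between a sample and its encode-decode reconstruction."""
    z = (x - mu) @ V          # encode
    x_hat = mu + z @ V.T      # decode
    return np.linalg.norm(x - x_hat, axis=-1)

# Candidates held by another user: five similar samples, five dissimilar ones.
similar = np.outer(rng.normal(size=5), basis)
dissimilar = 3.0 * rng.normal(size=(5, 3))
candidates = np.vstack([similar, dissimilar])

# Request the half of the candidates the autoencoder reconstructs best.
errors = recon_error(candidates)
selected = candidates[errors <= np.median(errors)]
```

Samples drawn from a distribution close to the user's own incur low reconstruction error, so thresholding that error acts as a learned similarity filter without any hand-designed metric.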
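The first combination approach weights requested samples in the training objective according to their importance. A minimal sketch of such a per-sample-weighted loss follows; the weight values and function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def weighted_log_loss(probs, labels, weights):
    """Binary cross-entropy in which each sample carries its own weight,
    so requested data can count for less than the user's own data."""
    eps = 1e-12
    per_sample = -(labels * np.log(probs + eps)
                   + (1.0 - labels) * np.log(1.0 - probs + eps))
    return float(np.sum(weights * per_sample) / np.sum(weights))

# The user's own samples get weight 1.0; a requested sample gets a smaller
# weight reflecting its lower estimated similarity (values are illustrative).
probs = np.array([0.9, 0.8, 0.3])      # model outputs
labels = np.array([1.0, 1.0, 1.0])     # ground truth
weights = np.array([1.0, 1.0, 0.2])    # third sample was requested
loss = weighted_log_loss(probs, labels, weights)
```

Down-weighting a poorly matched requested sample reduces its influence on the gradient, which is the intuition behind capturing "different importance" of the requested data.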
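The second combination approach trains adversarially so that a domain discriminator cannot tell requested data from the user's own data. The sketch below fits a tiny logistic-regression discriminator and then applies a single hypothetical adversarial step that shifts the requested features against the discriminator's weight vector; the paper's method operates on deep feature representations, so this is only a conceptual stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy features: requested data is a shifted copy of the user's own data.
own = rng.normal(0.0, 1.0, size=(200, 2))
requested = rng.normal(1.0, 1.0, size=(200, 2))

# Domain discriminator: logistic regression, label 1 = "requested".
X = np.vstack([own, requested])
d = np.concatenate([np.zeros(len(own)), np.ones(len(requested))])
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    grad = p - d                        # BCE gradient w.r.t. the logits
    w -= 0.1 * X.T @ grad / len(X)
    b -= 0.1 * grad.mean()

# Adversarial step: move requested features against the discriminator's
# weight vector so their "requested" scores fall toward 0.5 (one step of a
# hypothetical feature-extractor update in the two-player game).
step = 0.5
adapted = requested - step * w / (np.linalg.norm(w) ** 2 + 1e-12)
score_before = sigmoid(requested @ w + b).mean()
score_after = sigmoid(adapted @ w + b).mean()
```

Iterating the two updates drives the discriminator toward chance-level output, which is exactly the sense in which the distribution discrepancy between requested and own data is minimized.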


Metadata
Title: Enhancing personalized modeling via weighted and adversarial learning
Authors: Wei Du, Xintao Wu
Publication date: 17-05-2021
Publisher: Springer International Publishing
Published in: International Journal of Data Science and Analytics, Issue 1/2021
Print ISSN: 2364-415X
Electronic ISSN: 2364-4168
DOI: https://doi.org/10.1007/s41060-021-00263-3
