2020 | Original Paper | Book Chapter

Threats to Federated Learning

Authors: Lingjuan Lyu, Han Yu, Jun Zhao, Qiang Yang

Published in: Federated Learning

Publisher: Springer International Publishing

Abstract

As data are increasingly stored in separate silos and societies become more aware of data privacy issues, the traditional centralized approach to training artificial intelligence (AI) models is facing strong challenges. Federated learning (FL) has recently emerged as a promising solution under this new reality. However, existing FL protocol designs have been shown to exhibit vulnerabilities that can be exploited by adversaries both within and outside the system to compromise data privacy. It is thus of paramount importance to make FL system designers aware of the implications of FL algorithm design for privacy preservation. Currently, there is no survey on this topic. In this chapter, we bridge this important gap in the FL literature. We provide a concise introduction to the concept of FL and a unique taxonomy covering threat models and the two major classes of attacks on FL: (1) poisoning attacks and (2) inference attacks, offering an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by various attacks, and discuss promising future research directions towards more robust privacy preservation in FL.
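
To make the poisoning threat concrete, the following is a minimal, self-contained sketch (plain NumPy; the toy linear-regression task, client data, dimensions, and the poison_scale parameter are hypothetical illustrations, not the chapter's code) of a single FedAvg round in which one malicious participant scales its local update, a simple instance of the model-poisoning attacks surveyed in this chapter.

```python
# Minimal sketch (hypothetical toy setup, not the chapter's code): one FedAvg round
# on a linear-regression task, with one malicious client scaling its update.
import numpy as np

rng = np.random.default_rng(0)
DIM, N_CLIENTS, LOCAL_STEPS, LR = 5, 4, 10, 0.1
true_w = rng.normal(size=DIM)

def make_client_data(n=50):
    """Each client holds a private shard of (x, y); the server never sees raw data."""
    x = rng.normal(size=(n, DIM))
    y = x @ true_w + 0.1 * rng.normal(size=n)
    return x, y

clients = [make_client_data() for _ in range(N_CLIENTS)]

def local_update(w_global, data, poison_scale=1.0):
    """Run a few local SGD steps and return the (possibly poisoned) model update."""
    x, y = data
    w = w_global.copy()
    for _ in range(LOCAL_STEPS):
        grad = 2.0 * x.T @ (x @ w - y) / len(y)   # gradient of mean squared error
        w -= LR * grad
    # An honest client returns (w - w_global); a malicious one scales the update,
    # dragging the averaged global model away from the honest optimum.
    return poison_scale * (w - w_global)

w_global = np.zeros(DIM)
updates = []
for i, data in enumerate(clients):
    scale = -10.0 if i == 0 else 1.0              # client 0 behaves maliciously
    updates.append(local_update(w_global, data, poison_scale=scale))

# FedAvg: the server simply averages client updates, so a single scaled update
# shifts the global model noticeably.
w_global = w_global + np.mean(updates, axis=0)
print("distance to true model:", np.linalg.norm(w_global - true_w))
```

Because the server averages updates without inspecting them, a single scaled update can dominate the aggregate; Byzantine-robust aggregation rules such as coordinate-wise median or Krum, discussed in the robustness literature this chapter surveys, aim to bound that influence.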

Metadata
Title
Threats to Federated Learning
Authors
Lingjuan Lyu
Han Yu
Jun Zhao
Qiang Yang
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-63076-8_1
