
2023 | OriginalPaper | Chapter

Model Similarity-Based Defense Scheme Against Backdoor Attacks on Federated Learning

Authors : Long Su, Jun Ye, Longjuan Wang

Published in: Frontier Computing

Publisher: Springer Nature Singapore


Abstract

Federated learning is a distributed deep learning paradigm that lets thousands of participants jointly train a model without exposing their local data. For example, multiple smartphones can collaboratively learn a next-word prediction model for a keyboard without revealing what the users typed. However, federated learning remains fragile: with a large number of participating clients, there is no guarantee that every participant behaves honestly during training. A malicious client can embed a backdoor in its training data so that the poisoned model still converges and shows good accuracy on the main task, while on inputs containing a specific trigger it produces whatever output the attacker wants. Typically, the parameters uploaded by such an abnormal client differ markedly from those uploaded by honest clients. This paper proposes a new method to filter out dishonest participants. The scheme measures the similarity between the clients' local models, identifies benign and abnormal clients by finding the largest cluster, and then excludes the abnormal local models when the global model is updated.
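The clustering-based filtering described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function name `filter_and_aggregate`, the cosine-similarity measure, and the similarity threshold are all assumptions chosen for clarity. The idea is only that mutually similar updates form the largest cluster, which is treated as benign and averaged into the global model.

```python
import numpy as np

def filter_and_aggregate(updates, sim_threshold=0.5):
    """Hypothetical sketch of similarity-based filtering.

    updates: list of client model updates (NumPy arrays, same shape).
    Keeps the largest mutually similar group (by pairwise cosine
    similarity) and averages only those updates.
    """
    n = len(updates)
    flat = np.stack([u.ravel() for u in updates])
    # Normalize rows, then pairwise cosine similarity matrix.
    unit = flat / np.linalg.norm(flat, axis=1, keepdims=True)
    sims = unit @ unit.T
    # Each client's candidate cluster: clients similar to it.
    clusters = [np.flatnonzero(sims[i] >= sim_threshold) for i in range(n)]
    benign = max(clusters, key=len)  # largest cluster assumed benign
    aggregated = flat[benign].mean(axis=0).reshape(updates[0].shape)
    return aggregated, benign
```

Under this sketch, backdoored updates that point in a markedly different direction fall outside the majority cluster and are simply ignored during aggregation; the threshold would in practice need tuning to the non-IID spread of honest clients.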

Metadata
DOI
https://doi.org/10.1007/978-981-99-1428-9_265