
2022 | OriginalPaper | Chapter

A Communication-Efficient Federated Learning: A Probabilistic Approach

Authors : Chaitanya Thuppari, Srikanth Jannu

Published in: Computing, Communication and Learning

Publisher: Springer Nature Switzerland


Abstract

Federated learning (FL) enables edge devices, such as IoT devices and servers, to collaboratively train a model without sharing their private data. It requires devices to exchange their ML model parameters, so the time needed to jointly learn a reliable model depends on the number of training steps and the parameter transmission time per step. In practice, FL parameter transmissions are often carried out by a large number of participating devices over resource-constrained communication networks, such as wireless networks with limited bandwidth and power. Consequently, the repeated transmission of FL parameters from edge devices introduces a notable delay, which can exceed the ML model training time by orders of magnitude. Communication delay therefore constitutes a serious problem in FL. A communication-efficient FL framework is proposed to jointly improve the FL convergence time and the training loss. In this framework, a probabilistic device selection scheme is designed so that the devices expected to improve the convergence speed and training loss the most are assigned high probabilities of being selected to transmit their models. To further reduce the transmission time, we propose a novel method to reduce the number of model parameters exchanged by devices, and an efficient wireless resource allocation scheme is developed. Simulation results show that the proposed FL framework can improve identification accuracy.
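The probabilistic device selection described above can be illustrated with a minimal sketch. The chapter does not specify the exact selection rule, so this example makes a common assumption: each device's selection probability is proportional to the magnitude of its local model update, a frequently used proxy for its expected effect on convergence speed and training loss. The function names and the norm-based weighting are illustrative, not the authors' implementation.

```python
import numpy as np

def selection_probabilities(update_norms):
    """Assign each device a selection probability proportional to the
    magnitude of its local update (assumed proxy for its expected
    contribution to convergence)."""
    norms = np.asarray(update_norms, dtype=float)
    total = norms.sum()
    if total == 0.0:
        # No information about the devices: fall back to uniform selection.
        return np.full(len(norms), 1.0 / len(norms))
    return norms / total

def select_devices(update_norms, k, seed=None):
    """Sample k distinct devices per round, favouring devices whose
    updates are expected to speed up convergence."""
    rng = np.random.default_rng(seed)
    p = selection_probabilities(update_norms)
    return rng.choice(len(p), size=k, replace=False, p=p)

# Device 2 has the largest local update, so it is most likely to be
# scheduled for transmission in this round.
norms = [0.5, 1.0, 4.0, 0.5]
probs = selection_probabilities(norms)   # proportional to [0.5, 1.0, 4.0, 0.5]
chosen = select_devices(norms, k=2, seed=0)
```

Under this scheme, devices with small updates are not excluded outright but are simply sampled less often, which preserves some coverage of the full device population across rounds.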


Metadata
Title
A Communication-Efficient Federated Learning: A Probabilistic Approach
Authors
Chaitanya Thuppari
Srikanth Jannu
Copyright Year
2022
DOI
https://doi.org/10.1007/978-3-031-21750-0_16
