
2024 | OriginalPaper | Chapter

Fully Distributed Deep Neural Network: F2D2N

Authors: Ernesto Leite, Fabrice Mourlin, Pierre Paradinas

Published in: Mobile, Secure, and Programmable Networking

Publisher: Springer Nature Switzerland


Abstract

Recent advances in Artificial Intelligence (AI) have accelerated its adoption at an unprecedented pace. Large Language Models (LLMs) trained on tens of billions of parameters show how crucial parallelizing models has become. Different techniques exist for distributing Deep Neural Networks, but they are challenging to implement, and the cost of training GPU-based architectures is becoming prohibitive. In this document we present a distributed approach that is easier to implement, in which data and model are distributed across processing units hosted on a cluster of CPU- or GPU-based machines. Communication is done by message passing. The model is distributed over the cluster and stored locally or on a data lake. We prototyped this approach using open-source libraries, and we present the benefits this implementation can bring.
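The abstract outlines the core idea: each processing unit in the cluster owns a part of the model and exchanges activations with its neighbours by message passing. The sketch below illustrates that pattern with Python's standard multiprocessing library; it is not the authors' implementation, and the two-layer MLP, the layer sizes, and the queue-based transport are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): model parallelism by message
# passing. Each process owns one layer of a small MLP; activations travel
# between processes through queues.
import numpy as np
from multiprocessing import Process, Queue

def layer_worker(in_q: Queue, out_q: Queue, in_dim: int, out_dim: int, last: bool):
    """Own one layer's parameters locally and forward incoming activations."""
    rng = np.random.default_rng(0)
    W = rng.standard_normal((in_dim, out_dim)) * 0.01  # weights stay on this unit
    b = np.zeros(out_dim)
    while True:
        x = in_q.get()                    # receive activations (message passing)
        if x is None:                     # shutdown signal
            out_q.put(None)
            return
        z = x @ W + b
        y = z if last else np.maximum(z, 0.0)   # ReLU on the hidden layer
        out_q.put(y)                      # send activations to the next stage

if __name__ == "__main__":
    q0, q1, q2 = Queue(), Queue(), Queue()
    # Two processing units, each hosting a distinct slice of the model.
    stages = [
        Process(target=layer_worker, args=(q0, q1, 784, 128, False)),
        Process(target=layer_worker, args=(q1, q2, 128, 10, True)),
    ]
    for p in stages:
        p.start()
    batch = np.random.rand(32, 784)       # e.g. a mini-batch of MNIST-sized inputs
    q0.put(batch)
    logits = q2.get()
    print("output shape:", logits.shape)  # (32, 10)
    q0.put(None)                          # propagate shutdown through the pipeline
    q2.get()
    for p in stages:
        p.join()
```

The same message-passing structure applies whether the stages run as processes on one machine or as workers on different nodes of a cluster; only the transport (queues here, a network channel in a real deployment) would change.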


Metadata
Title
Fully Distributed Deep Neural Network: F2D2N
Authors
Ernesto Leite
Fabrice Mourlin
Pierre Paradinas
Copyright Year
2024
DOI
https://doi.org/10.1007/978-3-031-52426-4_15
