
2020 | Original Paper | Book Chapter

Artificial Neural Networks Training Acceleration Through Network Science Strategies

Authors: Lucia Cavallaro, Ovidiu Bagdasar, Pasquale De Meo, Giacomo Fiumara, Antonio Liotta

Published in: Numerical Computations: Theory and Algorithms

Publisher: Springer International Publishing


Abstract

Deep Learning opened artificial intelligence to an unprecedented number of new applications. A critical success factor is the ability to train deeper neural networks, striving for stable and accurate models. This translates into Artificial Neural Networks (ANNs) that become unmanageable as the number of features increases. The novelty of our approach is to employ Network Science strategies to tackle the complexity of the actual ANNs at each epoch of the training process. The work presented herein originates in our earlier publications, where we explored the acceleration effects obtained by enforcing, in turn, scale-freeness, small-worldness, and sparsity during the ANN training process. The efficiency of our approach has also recently been confirmed by independent researchers, who managed to train a million-node ANN on non-specialized laptops. Encouraged by these results, we have now taken a closer look at some tunable parameters of our previous approach to pursue a further acceleration effect. In particular, we investigate the revise fraction parameter, to verify whether its double-check role is actually necessary. Our method is independent of specific machine learning algorithms or datasets, since we operate merely on the topology of the ANNs. We demonstrate that the revise phase can be skipped, halving the overall execution time with an almost negligible loss of quality.
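The topology-centric training cycle the abstract builds on (a prune-and-regrow scheme in the spirit of Sparse Evolutionary Training, which the related million-neuron work scales up to commodity hardware) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names and the `epsilon`/`zeta` parameters are assumptions chosen for clarity. Each epoch, the smallest-magnitude fraction `zeta` of active connections is pruned and the same number of connections is regrown at random inactive positions, so the sparsity level stays constant while the topology evolves.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_init(n_in, n_out, epsilon=11):
    """Erdos-Renyi sparse layer: connection probability
    scales as epsilon * (n_in + n_out) / (n_in * n_out)."""
    p = min(1.0, epsilon * (n_in + n_out) / (n_in * n_out))
    mask = rng.random((n_in, n_out)) < p
    weights = rng.normal(0.0, 0.1, (n_in, n_out)) * mask
    return weights, mask

def evolve_topology(weights, mask, zeta=0.3):
    """One topology-evolution step: prune the zeta fraction of
    smallest-magnitude active weights, then regrow the same number
    of connections at random currently inactive positions."""
    active = np.flatnonzero(mask)
    n_prune = int(zeta * active.size)
    # flat indices of the n_prune weakest active connections
    magnitudes = np.abs(weights.ravel()[active])
    prune_idx = active[np.argsort(magnitudes)[:n_prune]]
    mask.ravel()[prune_idx] = False
    weights.ravel()[prune_idx] = 0.0
    # regrow elsewhere, keeping the number of connections constant
    inactive = np.flatnonzero(~mask.ravel())
    grow_idx = rng.choice(inactive, size=n_prune, replace=False)
    mask.ravel()[grow_idx] = True
    weights.ravel()[grow_idx] = rng.normal(0.0, 0.1, n_prune)
    return weights, mask

w, m = sparse_init(784, 100)
n_before = int(m.sum())
w, m = evolve_topology(w, m, zeta=0.3)
assert int(m.sum()) == n_before  # sparsity level is preserved
```

In this picture, a "revise" phase would be an extra double-check pass over the evolved topology after each epoch; the paper's finding is that omitting such a pass roughly halves execution time at an almost negligible cost in model quality.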


Metadata
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-40616-5_27