2019 | OriginalPaper | Book Chapter

Densifying Assumed-Sparse Tensors

Improving Memory Efficiency and MPI Collective Performance During Tensor Accumulation for Parallelized Training of Neural Machine Translation Models

Authors: Derya Cavdar, Valeriu Codreanu, Can Karakus, John A. Lockman III, Damian Podareanu, Vikram Saletore, Alexander Sergeev, Don D. Smith II, Victor Suthichai, Quy Ta, Srinivas Varadharajan, Lucas A. Wilson, Rengan Xu, Pei Yang

Published in: High Performance Computing

Publisher: Springer International Publishing

Abstract

Neural machine translation, the use of neural networks to translate human language, is an area of active research exploring new neuron types and network topologies with the goal of dramatically improving machine translation performance. Current state-of-the-art approaches, such as the multi-head attention-based transformer, require very large translation corpora and many epochs to produce models of reasonable quality. Recent attempts to parallelize the official TensorFlow "Transformer" model across multiple nodes have hit roadblocks due to excessive memory use and the resulting out-of-memory errors when performing MPI collectives.
This paper describes modifications made to the Horovod MPI-based distributed training framework to reduce memory usage for transformer models by converting assumed-sparse tensors to dense tensors, and subsequently replacing the sparse gradient gather with a dense gradient reduction. The result is a dramatic increase in scale-out capability, with CPU-only scaling tests on the Stampede2 supercomputer achieving 91% weak scaling efficiency at up to 1200 MPI processes (300 nodes) and up to 65% strong scaling efficiency at up to 400 MPI processes (200 nodes).
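
To make the described change concrete, the sketch below shows how an assumed-sparse gradient can be densified before the collective, using the public TensorFlow 1.x graph API and horovod.tensorflow. This is an illustration of the technique named in the abstract, not the authors' actual patch; the toy embedding model, tensor shapes, variable names, and learning rate are assumptions made for the example.

    # Illustrative sketch (not the paper's patch): densify an assumed-sparse
    # gradient so ranks combine it with a dense allreduce instead of an allgather.
    import tensorflow as tf          # TensorFlow 1.x-style graph API
    import horovod.tensorflow as hvd

    hvd.init()

    # Toy model: an embedding lookup, whose gradient TensorFlow represents as
    # tf.IndexedSlices ("assumed sparse") even when, as in transformer training,
    # nearly every row of the table is touched each step.
    embedding = tf.get_variable("embedding", shape=[1000, 64])
    ids = tf.constant([1, 2, 3])
    loss = tf.reduce_sum(tf.nn.embedding_lookup(embedding, ids))

    opt = tf.train.GradientDescentOptimizer(learning_rate=0.01)
    grads_and_vars = opt.compute_gradients(loss)

    def densify(grad):
        # Scatter the (indices, values) pairs of an IndexedSlices into a dense
        # tensor of the variable's full shape. A dense gradient can be summed
        # with a fixed-size MPI allreduce; an IndexedSlices gradient would
        # instead be allgathered, and that gathered buffer grows with the
        # number of participating processes.
        if isinstance(grad, tf.IndexedSlices):
            return tf.convert_to_tensor(grad)
        return grad

    # Average densified gradients across all ranks, then apply them locally.
    averaged = [(hvd.allreduce(densify(g)), v) for g, v in grads_and_vars]
    train_op = opt.apply_gradients(averaged)

The design trade-off is a larger but fixed-size per-step message in exchange for a collective whose memory footprint no longer scales with the process count. Current Horovod releases expose essentially this conversion as the sparse_as_dense=True option of hvd.DistributedOptimizer.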

Metadata
Title
Densifying Assumed-Sparse Tensors
Authors
Derya Cavdar
Valeriu Codreanu
Can Karakus
John A. Lockman III
Damian Podareanu
Vikram Saletore
Alexander Sergeev
Don D. Smith II
Victor Suthichai
Quy Ta
Srinivas Varadharajan
Lucas A. Wilson
Rengan Xu
Pei Yang
Copyright Year
2019
DOI
https://doi.org/10.1007/978-3-030-20656-7_2
