
2015 | Original Paper | Book Chapter

4. Efficient Implementation of the Force Calculation in MD Simulations

Authors: Alexander Heinecke, Wolfgang Eckhardt, Martin Horsch, Hans-Joachim Bungartz

Published in: Supercomputing for Molecular Dynamics Simulations

Publisher: Springer International Publishing


Abstract

This chapter describes how the computational kernel of MD simulations, the force calculation between particles, can be mapped to different kinds of hardware with minimal changes to the software. Since ls1 mardyn is based on the so-called linked-cells algorithm, several different facets of this approach are optimized. First, we present a newly developed sliding window traversal of the entire data structure, which enables the seamless integration of new optimizations such as the vectorization of the Lennard-Jones-12-6 potential. Second, we describe and evaluate several variants of mapping this potential to today's SIMD/vector hardware using intrinsics, taking the Intel Xeon processor and the Intel Xeon Phi coprocessor as examples, depending on the functionality offered by the hardware. This is done for single-center as well as for multi-centered rigid-body molecules.
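For reference, the Lennard-Jones-12-6 pair potential whose vectorization the chapter discusses is

$$U_{\mathrm{LJ}}(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right],$$

with well depth $\varepsilon$ and size parameter $\sigma$. The force on a pair of particles separated by the vector $\mathbf{r}$, which the computational kernel evaluates, follows as $\mathbf{F}(\mathbf{r}) = \frac{24\varepsilon}{r^{2}}\left[2\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]\mathbf{r}$.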
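To illustrate how such a pair kernel maps to SIMD hardware with intrinsics, the following is a minimal sketch in C, not the ls1 mardyn implementation: it assumes the linked-cells traversal has already gathered the squared pair distances into a contiguous array and computes the force scaling factor for four pairs per AVX instruction. The function name, the array layout, and the assumption that n is a multiple of four are all illustrative.

```c
#include <immintrin.h>

/* Illustrative sketch, not the authors' kernel: evaluates the
 * Lennard-Jones-12-6 force factor
 *     f(r)/r^2 = 24*eps * (2*(sigma/r)^12 - (sigma/r)^6) / r^2
 * for four particle pairs at once with AVX double-precision
 * intrinsics. Multiplying the factor by the displacement vector
 * of a pair yields the force. r2[] holds pre-gathered squared
 * distances; n is assumed to be a multiple of 4. */
void lj_force_factor_avx(const double *r2, double eps, double sigma2,
                         double *fscale, int n)
{
    const __m256d v24eps  = _mm256_set1_pd(24.0 * eps);
    const __m256d vsigma2 = _mm256_set1_pd(sigma2);      /* sigma^2 */

    for (int i = 0; i < n; i += 4) {
        __m256d vr2  = _mm256_loadu_pd(&r2[i]);
        __m256d lj2  = _mm256_div_pd(vsigma2, vr2);      /* (sigma/r)^2  */
        __m256d lj6  = _mm256_mul_pd(_mm256_mul_pd(lj2, lj2), lj2);
        __m256d lj12 = _mm256_mul_pd(lj6, lj6);          /* (sigma/r)^12 */
        __m256d num  = _mm256_sub_pd(_mm256_add_pd(lj12, lj12), lj6);
        __m256d vf   = _mm256_div_pd(_mm256_mul_pd(v24eps, num), vr2);
        _mm256_storeu_pd(&fscale[i], vf);
    }
}
```

In a real cutoff-based kernel, a comparison mask (for example from _mm256_cmp_pd against the squared cutoff radius, combined with _mm256_and_pd) would additionally zero the contributions of pairs beyond the cutoff; the variants evaluated in the chapter differ in exactly such details, depending on the instruction set.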

Metadata
Title
Efficient Implementation of the Force Calculation in MD Simulations
Authors
Alexander Heinecke
Wolfgang Eckhardt
Martin Horsch
Hans-Joachim Bungartz
Copyright Year
2015
DOI
https://doi.org/10.1007/978-3-319-17148-7_4
