

11. An Introduction to Parallel Programming Using MPI

Authors: Dr. Joe Pitt-Francis, Dr. Jonathan Whiteley

Published in: Guide to Scientific Computing in C++

Publisher: Springer London


Abstract

The most common type of high-performance parallel computer is a distributed memory computer: a computer that consists of many processors, each with its own individual memory, that can access data stored by other processors only by passing messages across a network. This chapter serves as an introduction to the Message Passing Interface (MPI), a widely used library for writing parallel programs on distributed memory architectures. Although the MPI libraries contain many different functions, basic code may be written using only a very small subset of them. This chapter provides a basic guide to these commonly used functions, so that you will be able to write simple MPI programs, edit MPI programs written by other programmers, and understand the function calls when using a scientific library built on MPI.
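
As a taste of the style of programming involved, the sketch below is a minimal MPI "hello world" in C++. It uses the C bindings of the library, which remain available in every MPI version (the MPI:: C++ bindings referred to in the footnotes were deprecated in MPI-2.2 and removed in MPI-3). The overall structure, initialise, query rank and size, do the work, finalise, is common to essentially all MPI programs.

    #include <iostream>
    #include <mpi.h>

    int main(int argc, char* argv[]) {
      MPI_Init(&argc, &argv);                // start the MPI machinery

      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // identifier of this process
      MPI_Comm_size(MPI_COMM_WORLD, &size);  // total number of processes

      std::cout << "Hello from process " << rank
                << " of " << size << std::endl;

      MPI_Finalize();                        // shut MPI down before exiting
      return 0;
    }

Compiled with an MPI wrapper compiler and launched with, for example, "mpirun -np 4 ./hello", the same binary is executed by all four processes, each printing its own rank.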

Footnotes
1
The Portable Extensible Toolkit for Scientific Computing (PETSc, pronounced “pet see”) is a library providing functionality for the solution of linear and nonlinear systems of equations on both sequential and parallel architectures.
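
For a flavour of the library, the following is a minimal sketch of solving a linear system through PETSc's KSP (Krylov subspace solver) interface. The system here is an arbitrary diagonal one, the size n is arbitrary, error-code checking is omitted for brevity, and a reasonably recent PETSc (3.5 or later, where KSPSetOperators takes three arguments) is assumed.

    #include <petscksp.h>

    int main(int argc, char* argv[]) {
      PetscInitialize(&argc, &argv, NULL, NULL);

      const PetscInt n = 10;

      // Assemble a diagonal matrix; each process sets only the rows it owns.
      Mat A;
      MatCreate(PETSC_COMM_WORLD, &A);
      MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
      MatSetFromOptions(A);
      MatSetUp(A);
      PetscInt Istart, Iend;
      MatGetOwnershipRange(A, &Istart, &Iend);
      for (PetscInt i = Istart; i < Iend; i++) {
        MatSetValue(A, i, i, 2.0, INSERT_VALUES);
      }
      MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
      MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

      // Right-hand side b = (1, ..., 1) and solution vector x.
      Vec b, x;
      VecCreate(PETSC_COMM_WORLD, &b);
      VecSetSizes(b, PETSC_DECIDE, n);
      VecSetFromOptions(b);
      VecDuplicate(b, &x);
      VecSet(b, 1.0);

      // Solve A x = b; the solver is configurable from the command line.
      KSP ksp;
      KSPCreate(PETSC_COMM_WORLD, &ksp);
      KSPSetOperators(ksp, A, A);
      KSPSetFromOptions(ksp);
      KSPSolve(ksp, b, x);

      KSPDestroy(&ksp);
      VecDestroy(&x);
      VecDestroy(&b);
      MatDestroy(&A);
      PetscFinalize();
      return 0;
    }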
 
2
There are several programming libraries which give the programmer access to a distributed shared memory computer, where machines on a network act as if they were part of one contiguous system. At the time of writing, however, these libraries have not seen widespread use.
 
3
MPI implementations vary in how they return console output from the individual processes to the console from which the program was launched. Even when flush is called on the cout stream, it may still be the case that the MPI machinery is buffering output.
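
One common mitigation, sketched below, is to build each line of output in a std::ostringstream and then write it with a single insertion followed by a flush. This prevents a line from being interleaved character-by-character with output from other processes, although, as noted above, the MPI runtime may still buffer or reorder whole lines.

    #include <iostream>
    #include <sstream>
    #include <mpi.h>

    int main(int argc, char* argv[]) {
      MPI_Init(&argc, &argv);
      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      // Build the whole line first, then write it in one operation.
      std::ostringstream line;
      line << "[rank " << rank << "] partial result ready\n";
      std::cout << line.str() << std::flush;

      MPI_Finalize();
      return 0;
    }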
 
4
MPI::ANY_TAG and MPI::ANY_SOURCE are the C++ names for these wildcard values. Many codes use the interchangeable C names MPI_ANY_TAG and MPI_ANY_SOURCE.
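
A minimal sketch of the typical use of these wildcards: process 0 accepts one message from each of the other processes in whatever order they happen to arrive, then reads the actual source and tag back from the MPI_Status object. The payload values and tags here are arbitrary, and the C names are used throughout.

    #include <iostream>
    #include <mpi.h>

    int main(int argc, char* argv[]) {
      MPI_Init(&argc, &argv);
      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      if (rank == 0) {
        // Accept size-1 messages in arrival order, not rank order.
        for (int i = 1; i < size; i++) {
          double value;
          MPI_Status status;
          MPI_Recv(&value, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                   MPI_COMM_WORLD, &status);
          std::cout << "Received " << value << " from process "
                    << status.MPI_SOURCE << " with tag "
                    << status.MPI_TAG << std::endl;
        }
      } else {
        double value = 10.0 * rank;  // arbitrary payload
        MPI_Send(&value, 1, MPI_DOUBLE, 0, rank, MPI_COMM_WORLD);
      }

      MPI_Finalize();
      return 0;
    }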
 
5
Note that these are the C++ object names for these types; MPI::CHAR, MPI::INT and MPI::DOUBLE are also known by their interchangeable C names MPI_CHAR, MPI_INT and MPI_DOUBLE. (There is no MPI_BOOL in the C interface; MPI-3 introduced MPI_CXX_BOOL for the C++ bool type.)
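
The datatype constant passed to a send or receive must match the C++ type of the buffer, as in this small sketch (run with at least two processes; the tags are arbitrary and the C names are used):

    #include <mpi.h>

    int main(int argc, char* argv[]) {
      MPI_Init(&argc, &argv);
      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      if (rank == 1) {
        char c = 'x';  int i = 7;  double d = 3.14;
        MPI_Send(&c, 1, MPI_CHAR,   0, 0, MPI_COMM_WORLD);  // char   <-> MPI_CHAR
        MPI_Send(&i, 1, MPI_INT,    0, 1, MPI_COMM_WORLD);  // int    <-> MPI_INT
        MPI_Send(&d, 1, MPI_DOUBLE, 0, 2, MPI_COMM_WORLD);  // double <-> MPI_DOUBLE
      } else if (rank == 0) {
        char c;  int i;  double d;
        MPI_Recv(&c, 1, MPI_CHAR,   1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&i, 1, MPI_INT,    1, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&d, 1, MPI_DOUBLE, 1, 2, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      }

      MPI_Finalize();
      return 0;
    }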
 
6
There are a few standard ways of getting data to file from a parallel program: concentration, where one process does all the writing, as suggested above; round-robin, where processes take it in turns to open and close the same file; parallel file libraries such as MPI's MPI-IO; and separate files, where each process writes data to a different place, to be re-assembled later. The choice of output method depends largely on the structure and size of the data.
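
A minimal sketch of the first of these strategies, concentration: each process contributes an equal-sized chunk of the result via MPI_Gather and only process 0 touches the file system. The filename and chunk size are arbitrary; MPI_Gatherv would be needed if the chunks varied in size.

    #include <fstream>
    #include <vector>
    #include <mpi.h>

    int main(int argc, char* argv[]) {
      MPI_Init(&argc, &argv);
      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      // Each process owns an equal-sized chunk of the result.
      const int chunk = 4;
      std::vector<double> local(chunk, static_cast<double>(rank));

      // Concentrate all chunks on process 0, ordered by rank.
      std::vector<double> all;
      if (rank == 0) {
        all.resize(static_cast<std::size_t>(chunk) * size);
      }
      MPI_Gather(local.data(), chunk, MPI_DOUBLE,
                 all.data(), chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

      // Only process 0 writes to file.
      if (rank == 0) {
        std::ofstream out("results.dat");
        for (double v : all) {
          out << v << "\n";
        }
      }

      MPI_Finalize();
      return 0;
    }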
 
Metadata
Title
An Introduction to Parallel Programming Using MPI
Authors
Dr. Joe Pitt-Francis
Dr. Jonathan Whiteley
Copyright Year
2012
Publisher
Springer London
DOI
https://doi.org/10.1007/978-1-4471-2736-9_11
