
1996 | Book

The Data Parallel Programming Model

Foundations, HPF Realization, and Scientific Applications

Edited by: Guy-René Perrin, Alain Darte

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This monograph-like book assembles the thoroughly revised and cross-reviewed lectures given at the School on Data Parallelism, held in Les Menuires, France, in May 1996.
The book is a unique survey of the current status and future perspectives of the promising and popular data parallel programming model. Much attention is paid to the style of writing and complementary coverage of the relevant issues throughout the 12 chapters. These lecture notes are thus ideally suited for advanced courses or self-instruction on data parallel programming. Furthermore, the book is indispensable reading for anybody doing research in data parallel programming and related areas.

Table of contents

Frontmatter
Introduction
Guy-René Perrin
The data parallel programming model: A semantic perspective
Abstract
We provide a short introduction to the data parallel programming model. We argue that parallel computing often makes little distinction between the execution model and the programming model. This results in poor programming and low portability. Using the “GOTO considered harmful” analogy, we show that data parallelism can be seen as a way out of this difficulty. We show that important aspects of the data parallel model were already present in earlier approaches to parallel programming, and demonstrate that the data parallel programming model can be characterized by a small number of concepts with simple semantics.
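The distinction the chapter draws between execution model and programming model can be made concrete with a small sketch, using NumPy as a stand-in (illustrative only, not taken from the chapter): the same computation expressed as an explicit element-by-element loop versus a single whole-array operation whose scheduling is left to the implementation.

```python
import numpy as np

# Execution-model view: an explicit sequential loop over elements.
def saxpy_loop(a, x, y):
    z = np.empty_like(y)
    for i in range(len(y)):
        z[i] = a * x[i] + y[i]
    return z

# Data parallel programming-model view: one operation on whole arrays;
# how it is mapped onto processors is the implementation's concern.
def saxpy_parallel(a, x, y):
    return a * x + y

x = np.arange(4.0)
y = np.ones(4)
assert np.array_equal(saxpy_loop(2.0, x, y), saxpy_parallel(2.0, x, y))
```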
Luc Bougé
An introduction to HPF
Abstract
This paper introduces the ideas that underlie the data-parallel language High Performance Fortran (HPF). It reviews HPF's key language elements: elemental array parallelism and data mapping pragmas; the relationship between data mapping and implicit communication; the FORALL and INDEPENDENT loop mechanisms for more general data parallelism; and the standard HPF library, which adds to the richness of the array operators at the disposal of the HPF programmer. It reviews the important problem of data mapping at the procedure call interface. It also discusses interoperability with other programming models, including SPMD (single-program, multiple-data) programming.
The latter part of the paper is a review of the development of version 2.0 of HPF. The extended language, under development in 1996, includes a richer data mapping capability; an extension to the INDEPENDENT loop that allows reduction operations in the loop range; a means for directing the mapping of computation as well as data; and a way to specify concurrent execution of several parallel tasks on disjoint subsets of processors.
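The elemental array parallelism and FORALL semantics mentioned above can be sketched outside HPF itself. A minimal NumPy rendering (an illustration of the semantics, not HPF code): HPF's `FORALL (i = 2:n-1) b(i) = (a(i-1) + a(i+1)) / 2` evaluates all right-hand sides before any assignment, which NumPy slicing reproduces because the slices of `a` are read before `b` is written.

```python
import numpy as np

n = 8
a = np.arange(float(n))          # a = [0, 1, ..., 7]
b = a.copy()
# Equivalent of: FORALL (i = 2:n-1)  b(i) = (a(i-1) + a(i+1)) / 2
# All reads complete before any write, as FORALL semantics require.
b[1:n-1] = (a[0:n-2] + a[2:n]) / 2
```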
Robert S. Schreiber
A data parallel scientific computing introduction
Abstract
We first present data parallel algorithms for classical linear algebra methods. We analyze some of the main problems that a user has to solve. As examples, we propose data parallel algorithms for the Gauss and Gauss-Jordan methods. Thus, we introduce some criteria, such as the average data parallel computation ratio, to evaluate and compare data parallel algorithms. Our studies include both dense and sparse matrix computations. We describe in detail a data parallel structure to map general sparse matrices and we present data parallel sparse matrix-vector multiplication. Then, we propose a data parallel preconditioned conjugate gradient algorithm using these matrix-vector operations.
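A sparse matrix-vector product of the kind the chapter builds on can be sketched with the common CSR layout (the chapter's own mapping structure may differ): nonzeros are stored row by row in flat arrays, and each row's dot product is independent, hence parallelizable.

```python
import numpy as np

# CSR sparse matrix-vector multiply (a standard layout, assumed here):
# values/col_idx hold nonzeros row by row; row_ptr marks row boundaries.
def csr_matvec(row_ptr, col_idx, values, x):
    y = np.zeros(len(row_ptr) - 1)
    for r in range(len(y)):      # each row is independent -> data parallel
        lo, hi = row_ptr[r], row_ptr[r + 1]
        y[r] = np.dot(values[lo:hi], x[col_idx[lo:hi]])
    return y

# 2x2 example: [[2, 0], [1, 3]] @ [1, 1]
row_ptr = np.array([0, 1, 3])
col_idx = np.array([0, 0, 1])
values  = np.array([2.0, 1.0, 3.0])
y = csr_matvec(row_ptr, col_idx, values, np.array([1.0, 1.0]))
```

A conjugate gradient solver then needs only this product plus vector additions and dot products, all of which are themselves data parallel.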
Serge G. Petiton, Nahid Emad
Current status of the SUIF research project
Abstract
This paper presents an overview of the SUIF (Stanford University Intermediate Format) research project and its current status. SUIF is a fully-functional research compiler that includes algorithms to analyze pointer aliases, find parallelism in outer loops across procedure calls, and optimize for parallelism and locality.
Monica S. Lam
Automatic parallelization in the polytope model
Abstract
The aim of this paper is to explain the importance of polytope and polyhedra in automatic parallelization. We show that the semantics of parallel programs is best described geometrically, as properties of sets of integral points in n-dimensional spaces, where n is related to the maximum nesting depth of DO loops. The needed properties translate nicely to properties of polyhedra, for which many algorithms have been designed for the needs of optimization and operation research. We show how these ideas apply to scheduling, placement and parallel code generation.
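The geometric view described above can be made concrete with a tiny example (illustrative; the chapter develops the general theory): the iteration domain of a 2-deep loop nest is a set of integral points of a polytope, and an affine schedule is valid if every dependence source executes strictly earlier than its target.

```python
# Iteration domain of:  for i in 1..n: for j in 1..i: a[i][j] = a[i-1][j] + 1
# Dependence: (i-1, j) -> (i, j).  Points (i, j) form a triangular polytope.
n = 4
domain = [(i, j) for i in range(1, n + 1) for j in range(1, i + 1)]

# Candidate affine schedule theta(i, j) = i: it satisfies the dependence,
# so for a fixed i all iterations j can run in parallel.
theta = lambda i, j: i
for (i, j) in domain:
    if (i - 1, j) in domain:          # the dependence source exists
        assert theta(i - 1, j) < theta(i, j)
```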
Paul Feautrier
State of the art in compiling HPF
Abstract
Proposing a nice programming model based on the data-parallel paradigm to the user is one thing. Running the resulting applications very fast is the next issue for a language aiming at high performance on massively parallel machines. This paper discusses the issues involved in HPF compilation and presents optimization techniques targeting the message-passing SPMD programming model of distributed-memory MIMD architectures.
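One cornerstone of such compilation schemes is the "owner computes" rule: each processor executes exactly the assignments whose left-hand side it owns under the data distribution. A minimal sketch for a 1D BLOCK distribution (the processor count and array size are assumed, not from the chapter):

```python
# Owner-computes partitioning for a BLOCK-distributed 1D array of n
# elements over p logical processors (a sketch, not a full compiler).
def owner(i, n, p):
    block = -(-n // p)               # ceil(n / p): the HPF BLOCK size
    return i // block

n, p = 10, 4
# Each processor q executes only the iterations whose LHS element it owns;
# reads of non-local elements become messages in the generated SPMD code.
my_iters = {q: [i for i in range(n) if owner(i, n, p) == q] for q in range(p)}
```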
Fabien Coelho, Cécile Germain, Jean-Louis Pazat
Tools for High Performance Fortran: A survey
Abstract
This survey has been constructed interactively on the World Wide Web with tool developers. Most of the descriptions of tools presented in this survey have been provided "as is" by tool designers, and some have been partly rewritten. This chapter is a "snapshot" of the WWW document as of July 10, 1996. The current version can be accessed at the following URL: http://www.irisa.fr/pampa/HPF/survey.html
Jean-Louis Pazat
Implementing data parallel programs on commodity clusters
Abstract
Parallel computing is moving rapidly from an era of “Big Iron” to a future that will be dominated by systems built from commodity components. Users will be able to construct high-performance systems by clustering off-the-shelf processors using widely available high-speed switches. A key question is how to organize the supporting software to best leverage these commodity hardware systems. We address this question by reviewing two aspects of how to best implement data parallel programs on commodity clusters. We identify research questions that arise at the interface between the network and the user program, and we also explore problems that arise when attempting to utilize multiprocessors as the basic building block for constructing clusters.
P. J. Hatcher, R. D. Russell, S. Kumaran, M. J. Quinn
Task parallelism and high-performance languages
Abstract
Data-parallel languages such as High Performance Fortran provide convenient, high-level notations for regular problems. However, they do not allow efficient expression of mixed task/data-parallel computations, or the coupling of separately compiled data-parallel modules. In this paper, we advocate the use of explicit task-parallel mechanisms for specifying task-parallel computations. We review the classes of problems for which such mechanisms can be useful, and describe two approaches to the implementation of these mechanisms. In Fortran M, task-parallel language extensions are introduced to support task-parallel computation. In HPF/MPI, a coordination library implementing the Message Passing Interface supports the exchange of data between HPF tasks.
Ian Foster
Supporting irregular and dynamic computations in data parallel languages
Abstract
Data parallel languages support a single instruction flow; the parallelism is expressed at the instruction level. In practice, data parallel languages have chosen arrays to support parallelism. This regular data structure allows a natural development of regular parallel algorithms. Implementing irregular algorithms requires a programming effort to project the irregular data structures onto regular ones. In this article we present the different techniques used to manage irregularity in data parallel languages. Each of them is illustrated with standard or experimental data parallel language constructs.
Jean-Luc Dekeyser, Philippe Marquet
Data parallelism and functional programming
Abstract
Data parallelism is often seen as a form of explicit parallelism for SIMD and vector machines, and data parallel programming as an explicit programming paradigm for these architectures. Data parallel languages possess certain software qualities as well, which justifies their use in higher level programming and specification closer to the algorithm domain. Thus, it is interesting to study how the data parallel paradigm can be best realized in a declarative setting, since declarative languages offer a pure view of computation which is good for these purposes. For numerical computing the functional programming paradigm is especially attractive, since numerical algorithms often are specified by recursion equations and thus can be translated more or less directly into recursive functional programs. Merging the data parallel and functional paradigms then yields languages and formalisms where many algorithms can be expressed in a very succinct fashion. In this paper we review data parallelism, functional programming, and existing approaches to the integration of the two paradigms. We then proceed to describe a formalism for data parallel functional programming, allowing very simple languages, where the view of aggregate data is particularly abstract. We explain how various data parallel operations can be expressed in this formalism. Finally, we conclude with a discussion of issues for languages based directly on the formalism.
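The claim that recursion equations translate almost directly into functional programs, and from there into data parallel reductions, can be sketched briefly (illustrative only; the chapter's formalism is more abstract):

```python
from functools import reduce

# Recursion equation:  s_0 = 0,  s_k = s_{k-1} + x_k
xs = [1.0, 2.0, 3.0, 4.0]

# Direct recursive functional rendering of the equation:
def total(xs):
    return 0.0 if not xs else total(xs[:-1]) + xs[-1]

# Data parallel reading: a reduction with an associative operator, which
# an implementation is free to evaluate as a parallel combining tree.
assert total(xs) == reduce(lambda a, b: a + b, xs, 0.0)
```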
Björn Lisper
Formal validation of data parallel programs: Introducing the assertional approach
Abstract
We present a proof system for a simple data parallel kernel language. This proof system is based on two-component assertions, where the current extent of parallelism is explicitly described. We define a weakest preconditions (WP) calculus and discuss the associated definability property. Thanks to this weakest preconditions calculus, we establish the completeness of the proof system. We finally discuss other approaches.
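The flavor of a WP calculus can be shown with the textbook assignment rule, wp(x := e, P) = P[e / x] (a standard scalar rule for illustration, not the paper's two-component assertions for parallel extent):

```python
# Weakest precondition of an assignment, computed by textual substitution
# over predicates represented as Python expression strings (a toy sketch).
def wp_assign(var, expr, post):
    return post.replace(var, f"({expr})")

# wp(x := x + 1, x > 0)  =  (x + 1) > 0
pre = wp_assign("x", "x + 1", "x > 0")

# Sanity check on sample states: the precondition holds at x exactly
# when the postcondition holds after executing x := x + 1.
for x in (-2, -1, 0, 5):
    assert eval(pre) == ((x + 1) > 0)
```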
L. Bougé, D. Cachera, Y. Le Guyadec, G. Utard, B. Virot
Backmatter
Metadata
Title
The Data Parallel Programming Model
Edited by
Guy-René Perrin
Alain Darte
Copyright year
1996
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-70646-5
Print ISBN
978-3-540-61736-5
DOI
https://doi.org/10.1007/3-540-61736-1