
About this Book

Mathematics is playing an ever more important role in the physical and biological sciences, provoking a blurring of boundaries between scientific disciplines and a resurgence of interest in the modern as well as the classical techniques of applied mathematics. This renewal of interest, both in research and teaching, has led to the establishment of the series Texts in Applied Mathematics (TAM).

The development of new courses is a natural consequence of a high level of excitement on the research frontier as newer techniques, such as numerical and symbolic computer systems, dynamical systems, and chaos, mix with and reinforce the traditional methods of applied mathematics. Thus, the purpose of this textbook series is to meet the current and future needs of these advances and to encourage the teaching of new courses. TAM will publish textbooks suitable for use in advanced undergraduate and beginning graduate courses, and will complement the Applied Mathematical Sciences (AMS) series, which will focus on advanced textbooks and research-level monographs.

Preface

A successful concurrent numerical simulation requires physics and mathematics to develop and analyze the model, numerical analysis to develop solution methods, and computer science to develop a concurrent implementation. No single course can or should cover all these disciplines. Instead, this course on concurrent scientific computing focuses on a topic that is not covered or is insufficiently covered by other disciplines: the algorithmic structure of numerical methods.

Table of Contents

Frontmatter

1. The Basics

Abstract
We begin our study of concurrent scientific computing with a study of the two most elementary scientific-computing problems: compute the sum of two vectors and compute the inner product of two vectors. These two trivial example problems are sufficient to introduce almost all concepts needed for our journey into concurrent scientific computing.
Eric F. Van de Velde
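The book's own multicomputer programs are not reproduced on this page. As a hedged illustration of the two elementary problems the chapter names, here is a minimal sequential Python sketch that mimics a block data distribution: each simulated "process" owns a contiguous block of the vectors, the vector sum is computed blockwise with no communication, and the inner product is assembled from per-process partial sums (the pattern a real multicomputer program would combine with a global reduction). The helper `block_ranges` is an invented name, not from the book.

```python
# Minimal sketch, not the book's code: vector sum and inner product
# over a simulated block distribution of p "processes".

def block_ranges(n, p):
    """Split indices 0..n-1 into p nearly equal contiguous blocks."""
    base, extra = divmod(n, p)
    start = 0
    for q in range(p):
        size = base + (1 if q < extra else 0)
        yield range(start, start + size)
        start += size

def vector_sum(x, y, p=4):
    """z = x + y; every block is independent, so no communication is needed."""
    z = [0.0] * len(x)
    for block in block_ranges(len(x), p):
        for i in block:
            z[i] = x[i] + y[i]
    return z

def inner_product(x, y, p=4):
    """Each block contributes a local partial sum; the final sum over
    partials stands in for the global reduction on a multicomputer."""
    partials = [sum(x[i] * y[i] for i in block)
                for block in block_ranges(len(x), p)]
    return sum(partials)
```

The contrast between the two operations is the point: the vector sum is embarrassingly parallel, while the inner product forces the processes to communicate.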

2. Vector and Matrix Operations

Abstract
The topic of Chapters 2 through 6 is concurrent linear algebra. This expanded coverage is motivated by two factors. First, linear algebra is a frequently used component of scientific computing. Computational problems recast in terms of linear algebra can often be considered solved, because standard libraries can tackle the remaining issues. Second, linear algebra is a rich source for numerical algorithms that are realistic in algorithmic complexity, but can be understood without an extensive mathematical background.
Eric F. Van de Velde

3. Iterative Methods

Abstract
The power method finds the dominant eigenvalues of a matrix. It will be covered in Section 3.1, not only because it is a useful method, but also because it provides a realistic computational context for the elementary operations of Chapters 1 and 2 without requiring an extensive mathematical background.
Eric F. Van de Velde
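For readers unfamiliar with the power method mentioned in this abstract, a short sequential Python sketch (an assumption of mine, not code from the book) shows its structure: repeated matrix-vector multiplication, normalization, and a Rayleigh-quotient estimate of the dominant eigenvalue. The matrix-vector product is exactly the kind of elementary operation from Chapters 1 and 2 that the chapter builds on.

```python
import math

def power_method(A, x, iters=200):
    """Estimate the dominant eigenvalue of A (list of rows) by the
    power method, starting from a nonzero vector x."""
    n = len(A)
    for _ in range(iters):
        # y = A x: the core matrix-vector product
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(v * v for v in y))
        x = [v / norm for v in y]          # normalize to avoid overflow
    # Rayleigh quotient x^T A x (with x^T x = 1) approximates lambda_max
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(x[i] * Ax[i] for i in range(n))
```

Convergence is geometric with ratio |λ₂/λ₁|, so the method is only practical when the dominant eigenvalue is well separated.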

4. LU-Decomposition

Abstract
Iterative linear-system solvers can be fast and are usually easy to implement. However, they have a limited application range, as they impose requirements on the coefficient matrix ranging from positive definiteness to special sparsity structure. Direct solvers, on the other hand, are almost always applicable. Although their operation count and execution time can be large, both are very predictable given the problem size. In Chapters 4 and 5, we shall develop two different direct solvers for general linear systems of equations.
Eric F. Van de Velde
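As background for this chapter's topic, a minimal sequential sketch of LU-decomposition in Doolittle form, without the pivoting that the chapter's full treatment requires, might look as follows (my illustration, not the book's multicomputer program):

```python
def lu_decompose(A):
    """In-place LU decomposition WITHOUT pivoting (Doolittle form).
    Afterwards A holds U in its upper triangle and the multipliers of
    the unit-lower-triangular L strictly below the diagonal.
    Assumes all pivots A[k][k] are nonzero."""
    n = len(A)
    for k in range(n):
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]                    # multiplier l_ik
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]      # eliminate column k
    return A
```

The triple loop makes the predictable operation count mentioned in the abstract concrete: about 2n³/3 floating-point operations for an n-by-n matrix, regardless of the matrix entries.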

5. QR-Decomposition

Abstract
The QR-decomposition of a matrix is used to solve nearly singular, overdetermined, and underdetermined systems of linear equations. In specialized form, it is an important component of the QR-algorithm, which computes all eigenvalues and eigenvectors of a matrix. The multicomputer program for QR-decomposition is an application of recursive doubling, but several interesting complications arise.
Eric F. Van de Velde
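For orientation, here is a small sequential QR-decomposition by modified Gram-Schmidt, one standard way (among Householder and Givens alternatives) to compute the factorization this chapter parallelizes; it is my sketch, not the book's program, and it assumes the columns of A are linearly independent.

```python
import math

def qr_mgs(A):
    """QR by modified Gram-Schmidt. A is a list of n columns of length m,
    m >= n. Returns (Q, R): Q has n orthonormal columns, R is n x n
    upper triangular, and A = Q R column by column."""
    n = len(A)
    Q = [col[:] for col in A]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        R[j][j] = math.sqrt(sum(v * v for v in Q[j]))
        Q[j] = [v / R[j][j] for v in Q[j]]         # normalize column j
        for k in range(j + 1, n):
            # project column j out of the remaining columns
            R[j][k] = sum(qv * av for qv, av in zip(Q[j], Q[k]))
            Q[k] = [av - R[j][k] * qv for av, qv in zip(Q[k], Q[j])]
    return Q, R
```

Because Q has orthonormal columns, an overdetermined least-squares problem Ax ≈ b reduces to the triangular system Rx = Qᵀb.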

6. Tridiagonal Solvers

Abstract
We shall develop one direct and one iterative solver for tridiagonal systems. The direct solver is based on a generalization of the recursive-doubling procedure known as full recursive doubling. The iterative tridiagonal solver is based on concurrent relaxation and is often a good alternative to any direct solver. Tridiagonal systems of linear equations often occur as part of a larger computation. They occur, for example, in some fast Poisson solvers and in alternating-direction methods. In these applications, one must usually solve many tridiagonal systems. Issues related to solving many tridiagonal systems are postponed until Chapter 8.
Eric F. Van de Velde
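The chapter's solvers (full recursive doubling and concurrent relaxation) are designed for multicomputers; as the sequential baseline they compete against, here is the standard elimination for tridiagonal systems, often called the Thomas algorithm (my sketch, not the book's code):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system. a = sub-diagonal (a[0] unused),
    b = main diagonal, c = super-diagonal (c[-1] unused),
    d = right-hand side. Works on copies; assumes no pivoting is needed
    (e.g. the matrix is diagonally dominant)."""
    n = len(b)
    bp, cp, dp = b[:], c[:], d[:]
    for i in range(1, n):                     # forward elimination
        m = a[i] / bp[i - 1]
        bp[i] -= m * cp[i - 1]
        dp[i] -= m * dp[i - 1]
    x = [0.0] * n
    x[-1] = dp[-1] / bp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = (dp[i] - cp[i] * x[i + 1]) / bp[i]
    return x
```

The loop-carried dependence in both sweeps is exactly what makes this algorithm hard to parallelize and motivates recursive doubling.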

7. The Fast Fourier Transform

Abstract
We consider the space L² of complex-valued functions f(x) on the closed interval [0, 2π] that are bounded, continuous, and periodic with period 2π.
Eric F. Van de Velde
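The discrete counterpart of expanding such periodic functions in a Fourier basis is the fast Fourier transform of the chapter title. A compact recursive radix-2 Cooley-Tukey sketch (my illustration, not the book's multicomputer version) shows the divide-and-conquer structure:

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT of a sequence whose length is a power
    of two. Splits into even- and odd-indexed subsequences, transforms
    each recursively, then combines with twiddle factors."""
    n = len(x)
    if n == 1:
        return [complex(x[0])]
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

The recursion depth is log₂ n, giving the O(n log n) operation count that replaces the O(n²) cost of the naive discrete Fourier transform.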

8. Poisson Solvers

Abstract
In this chapter, some elementary solution methods for the Poisson equation on a rectangular domain will be introduced. The algorithms used in the implementation of these elementary methods are fundamental building blocks in the construction of programs that solve other, more general, partial-differential equations.
Eric F. Van de Velde
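As one example of such an elementary method, here is a sequential Jacobi-relaxation sketch for the Poisson equation −Δu = f on a rectangular grid with the standard 5-point stencil and zero Dirichlet boundary values (a sketch under those assumptions, not the book's program):

```python
def jacobi_poisson(f, h, sweeps=100):
    """Jacobi sweeps for -laplace(u) = f on a rectangular grid with
    spacing h and zero Dirichlet boundary. f is a list of rows; the
    boundary rows and columns of u are held at 0. Each sweep replaces
    every interior value by the average of its four neighbors plus
    the scaled source term."""
    m, n = len(f), len(f[0])
    u = [[0.0] * n for _ in range(m)]
    for _ in range(sweeps):
        new = [row[:] for row in u]
        for i in range(1, m - 1):
            for j in range(1, n - 1):
                new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] +
                                    u[i][j - 1] + u[i][j + 1] +
                                    h * h * f[i][j])
        u = new
    return u
```

Because every update reads only neighboring grid points, the sweep parallelizes naturally over subgrids, with communication confined to subgrid boundaries.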

9. Multigrid Methods

Abstract
The multigrid methodology has been successfully applied to a wide range of problems and applications. The design of a successful multigrid program usually requires advanced theoretical insight as well as practical experience in the particular application. Such discussions are clearly beyond the scope of this book. On the other hand, the wide applicability of the multigrid idea justifies devoting a separate chapter to an introductory study of multigrid methods and their multicomputer implementation.
Eric F. Van de Velde

10. Domain Decomposition

Abstract
In the concurrent-computing literature, there is ambiguity over the use of the terms “domain decomposition” and “data distribution.” Many authors use the term domain decomposition for data distributions in which the data local to each process topologically correspond to a subdomain of the computational domain. We shall not use the term domain decomposition in this sense. We reserve the term domain decomposition for certain numerical methods that split the computational domain into two or more subdomains. Some elementary domain-decomposition methods were already encountered in Section 8.5, where domain-decomposed relaxation methods were discussed.
Eric F. Van de Velde

11. Particle Methods

Abstract
The motion of stars in the universe is determined by a gravitational field, which itself depends on the position and mass of all stars. We shall use this interesting astrophysical problem, called the N-body problem, to introduce particle methods. Algorithmically, these methods are substantially different from grid-oriented computations for two reasons. First, grid operators are short-range operators: they act only on neighboring grid points. Standard particle methods, on the other hand, feature long-range interactions: all particles interact with all other particles. Second, the load balance of multicomputer computations based on particle methods depends on computed values. This contrasts with grid-oriented computations, where the load balance depends only on the amount of data. From our experience with LU-decomposition, which is based on long-range operators and whose load balance is data-dependent through pivoting, we already know that data-distribution strategies play an important role for particle methods on multicomputers. However, as for grid-oriented computations, the data distribution is also tied to the geometry of the problem.
Eric F. Van de Velde
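The long-range, all-pairs interaction pattern described above can be made concrete with a direct-sum sketch of gravitational accelerations in two dimensions (my illustration, not the book's code; the softening length `eps` is a common device, assumed here, to keep close encounters finite):

```python
def accelerations(pos, mass, G=1.0, eps=1e-3):
    """Direct-sum gravitational accelerations in 2D: every particle
    interacts with every other particle, the O(N^2) long-range pattern.
    pos is a list of (x, y) pairs, mass a list of masses."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r2 = dx * dx + dy * dy + eps * eps   # softened distance^2
            inv_r3 = r2 ** -1.5
            acc[i][0] += G * mass[j] * dx * inv_r3
            acc[i][1] += G * mass[j] * dy * inv_r3
    return acc
```

Unlike a grid sweep, the cost per particle here depends on all N particles, and as particles cluster, the work per subdomain shifts, which is why load balance for particle methods depends on computed values.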

12. Computer Dependency

Abstract
Real multicomputers differ from the prototypical model introduced in Section 1.2. As a result, program derivations that target the model environment do not address some issues that arise on real multicomputers.
Eric F. Van de Velde

Backmatter
