2018 | Book

Applied Scientific Computing

With Python

Authors: Prof. Peter R. Turner, Prof. Thomas Arildsen, Prof. Kathleen Kavanagh

Publisher: Springer International Publishing

Book series: Texts in Computer Science


About this book

This easy-to-understand textbook presents a modern approach to learning numerical methods (or scientific computing), with a unique focus on the modeling and applications of the mathematical content. Emphasis is placed on the need for, and methods of, scientific computing for a range of different types of problems, supplying the evidence and justification to motivate the reader. Practical guidance on coding the methods is also provided, through simple-to-follow examples using Python.

Topics and features:

- Provides an accessible and applications-oriented approach, supported by working Python code for many of the methods
- Encourages both problem- and project-based learning through extensive examples, exercises, and projects drawn from practical applications
- Introduces the main concepts in modeling, Python programming, number representation, and errors
- Explains the essential details of numerical calculus, linear and nonlinear equations, including the multivariable Newton method
- Discusses interpolation and the numerical solution of differential equations, covering polynomial interpolation, splines, and the Euler, Runge–Kutta, and shooting methods
- Presents largely self-contained chapters, arranged in a logical order suitable for an introductory course on scientific computing

Undergraduate students embarking on a first course on numerical methods or scientific computing will find this textbook to be an invaluable guide to the field, and to the application of these methods across such varied disciplines as computer science, engineering, mathematics, economics, the physical sciences, and social science.

Table of Contents

Frontmatter
Chapter 1. Motivation and Background
Abstract
This first, short, chapter sets the stage for the remainder of the book by introducing some of the underlying issues and some of the ways in which these can be tackled. Scientific computing is by its very nature a subject that is important because it helps us solve real problems, and those real problems inevitably start with the need to “translate” a question from some other area of science into a mathematical model and then to solve the resulting mathematical problem. The solution will almost always be computational, and approximate, since most models are not amenable to paper-and-pencil approaches. Therefore we study the process of mathematical modeling and the fundamental topics of scientific computing themselves. The latter include the ability to code your solution algorithm. We have chosen Python as the language for this book, and so this first chapter includes introductions to both modeling and Python programming. Any computational solution has errors inherent in the process. In this introduction we see that simplistic approaches to approximating a function through a power series are often not practical, but that a careful restructuring of the problem can overcome this difficulty.
Peter R. Turner, Thomas Arildsen, Kathleen Kavanagh
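As a taste of the restructuring idea the abstract alludes to, here is a minimal sketch (our own illustration, not code from the book): summing the alternating Maclaurin series for e^(-x) directly suffers catastrophic cancellation for moderately large x, while summing the all-positive series for e^x and inverting is perfectly stable. The choice of function and of x = 30 is ours.

```python
import math

def exp_series(x, n_terms=100):
    """Sum the Maclaurin series for e**x term by term."""
    term, total = 1.0, 1.0
    for n in range(1, n_terms):
        term *= x / n                  # builds x**n / n! incrementally
        total += term
    return total

x = 30.0
naive = exp_series(-x)                 # alternating terms: massive cancellation
restructured = 1.0 / exp_series(x)     # all-positive series, then invert

print(f"naive        : {naive:.6e}")   # wildly wrong
print(f"restructured : {restructured:.6e}")
print(f"math.exp     : {math.exp(-x):.6e}")
```

The naive sum adds terms as large as about 7.7e11 to produce an answer near 9.4e-14, so essentially all significant digits are lost; the restructured version agrees with math.exp to full precision.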
Chapter 2. Number Representations and Errors
Abstract
This chapter focuses on the different types of errors inherent in numerical processes. The most basic source of these errors is the fact that arithmetic in the computer is not exact except in very special circumstances. Even the representation of numbers, the floating-point system, has errors inherent in it because numbers are represented in finite binary “words”. The IEEE floating-point number representation is described and some of the consequences in terms of representation and arithmetic errors are explored. Errors in representation lead to errors in arithmetic and in the evaluation of the elementary functions that are so critical to almost all mathematical models and their solutions. The resulting rounding errors typically accumulate during a lengthy computation and so must be controlled as much as possible. The IEEE Floating Point Standards tell us what bounds there are on these rounding errors, and therefore allow us to estimate, or even mitigate, them. Numerical processes are themselves finite. This finiteness gives rise to truncation errors, for example those resulting from restricting the number of terms in a series that we compute. In other settings it might be a spatial, or temporal, discrete “step size” that is used. Truncation and rounding errors interact; we cannot control one independently of the other.
Peter R. Turner, Thomas Arildsen, Kathleen Kavanagh
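A short sketch of the phenomena this chapter describes, using only standard Python (the specific snippets are our illustration, not the book's): 0.1 has no exact binary representation, repeated additions accumulate rounding error, and machine epsilon quantifies the granularity the IEEE 754 double format guarantees.

```python
import sys

# Representation error: 0.1 is not exactly representable in binary.
print(0.1 + 0.2 == 0.3)            # False
print(f"{0.1:.20f}")               # 0.10000000000000000555...

# Rounding errors accumulate during repeated arithmetic.
total = 0.0
for _ in range(10):
    total += 0.1
print(total == 1.0, total)         # False 0.9999999999999999

# Machine epsilon: the gap between 1.0 and the next larger double.
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2
print(eps, sys.float_info.epsilon)  # both 2.220446049250313e-16
```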
Chapter 3. Numerical Calculus
Abstract
This chapter has two primary topics: numerical differentiation and numerical integration. In some respects the methods of numerical differentiation are similar to those of numerical integration, in that they are typically based on using (in this case, differentiating) an interpolating polynomial. One major and important difference between numerical integration and differentiation is that integration is numerically a highly satisfactory operation, with results of high accuracy obtainable in economical ways. This is because integration tends to smooth out the errors of the polynomial approximations to the integrand. Unfortunately, the reliability and stability we find in numerical integration is not matched in differentiation, which tends to exaggerate the error in the original polynomial approximation. In the case of numerical integration we begin with simple, and often already familiar, approaches such as the trapezoid rule, including its relation to the fundamental concept of a Riemann sum. The trapezoid rule and Simpson’s rule are explored in more detail, which leads to a discussion of so-called composite integration rules, where the interval of integration is broken into many smaller pieces of the same size. The behavior of the errors in such rules is compared to that of the simple versions, and this allows us to develop methods that achieve guaranteed precision at the desired level. These composite rules in turn provide a basis for the introduction to practical approaches to numerical integration, which is our final major topic. Python implementation and the practical application of numerical calculus techniques are discussed throughout.
Peter R. Turner, Thomas Arildsen, Kathleen Kavanagh
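To make the composite-rule idea concrete, here is a minimal composite trapezoid rule (our sketch, not the book's implementation; the test integrand is our choice). The second-order error behavior the chapter analyzes shows up directly: doubling the number of subintervals cuts the error by roughly a factor of four.

```python
import math

def composite_trapezoid(f, a, b, n):
    """Composite trapezoid rule on [a, b] with n equal subintervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

# The integral of sin(x) over [0, pi] is exactly 2.
for n in (4, 8, 16, 32):
    approx = composite_trapezoid(math.sin, 0.0, math.pi, n)
    print(n, approx, abs(approx - 2.0))
# The error shrinks by about a factor of 4 each time n doubles,
# consistent with the O(h**2) accuracy of the composite rule.
```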
Chapter 4. Linear Equations
Abstract
In this chapter we begin with the familiar problem of solving two (or more) linear equations in two (or more) unknowns. The most elementary approach is the basis for Gauss elimination. The effects of rounding error in Gauss elimination can be severe. This leads us to pivoting, which is essentially equivalent to changing the order of the equations, and which can usually overcome this loss of precision. Next we study LU factorization, which allows subsequent systems with the same coefficient matrix to be solved much more efficiently. This is applied to iterative refinement of the computed solution. The basic approach is still that of Gauss elimination, but with the ability to do the manipulations on the matrix just once. Many large systems are sparse, meaning there are many zeros in the coefficient matrix. Iterative methods are often used for the solution of large sparse systems. We introduce the two fundamental approaches: Jacobi and Gauss-Seidel iterations. Next we turn to (linear) least squares approximation. This refers to the problem of finding the “best” fit to specified data using a linear combination of simpler functions, such as the terms of a polynomial. The final topic of the chapter is the eigenvalue problem. The basic approach is the power method, which is where we start. The power method provides estimates for the largest eigenvalue. With the help of Gershgorin’s theorem, modifications of the power method are developed to find further eigenvalues and their associated eigenvectors.
Peter R. Turner, Thomas Arildsen, Kathleen Kavanagh
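Of the topics above, Jacobi iteration is the quickest to sketch. The following is a minimal version under the usual assumption of a diagonally dominant matrix (our illustration using NumPy, not the book's code; the test system is invented for the example): each equation is solved for its own unknown using the previous iterate's values.

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration for Ax = b; A should be diagonally dominant."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    D = np.diag(A)                  # diagonal entries
    R = A - np.diagflat(D)          # off-diagonal part
    for _ in range(max_iter):
        x_new = (b - R @ x) / D     # solve equation i for unknown i
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Jacobi iteration did not converge")

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])  # diagonally dominant
b = np.array([2.0, 4.0, 10.0])
x = jacobi(A, b)
print(x, A @ x - b)                 # residual should be near zero
```

Gauss-Seidel differs only in using each newly updated component immediately within the sweep, which typically speeds convergence.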
Chapter 5. Iterative Solution of Nonlinear Equations
Abstract
This chapter addresses one of the most fundamental mathematical problems: the solution of a scalar nonlinear equation, \(f(x)=0\). All the methods presented are iterative in nature. We begin with perhaps the simplest idea: using bisection to reduce an interval known to contain a solution until it is smaller than an acceptable tolerance. Next we present Newton’s method, which is based on where the tangent line at a particular point crosses the axis. Provided we can find a “good enough” starting point, Newton’s method converges very quickly to the desired solution of the equation. Building on both the bisection method and Newton’s method gives rise to the secant method. Now, instead of simply halving the interval, we look at the point where the chord of the graph between the two ends of the interval crosses the axis. The secant method can converge significantly faster than mere bisection. It is both an improvement on bisection and a difference approximation to Newton’s method that does not require knowledge of the derivative. Finally, we present the setting for systems of nonlinear equations. Two of our standard modeling examples are extended from one unknown to two. The details for implementing Newton’s method for two equations and two unknowns are provided (in addition to the scalar algorithms) so that you have the solvers ready to go and can tackle the applied problems in this chapter, and beyond.
Peter R. Turner, Thomas Arildsen, Kathleen Kavanagh
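The tangent-line picture described above translates almost directly into code. A minimal scalar Newton sketch (ours, not the book's; the sqrt(2) test problem is our choice):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: repeatedly jump to where the tangent crosses the axis."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step                   # next iterate is the tangent's root
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Example: solve x**2 - 2 = 0, i.e. compute sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)  # 1.4142135623730951
```

The secant method replaces fprime(x) with the slope of the chord through the two most recent iterates, trading a little convergence speed for not needing the derivative.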
Chapter 6. Interpolation
Abstract
The chapter begins with polynomial interpolation, in which we seek a polynomial of specified degree that agrees with given data. The first approach is to use our knowledge of solving linear systems of equations to find the Lagrange interpolation polynomial by solving the Vandermonde system for the coefficients. However, that approach is both inefficient and, because of ill-conditioning, subject to computational error. The use of the Lagrange basis polynomials is equivalent to transforming that system to a diagonal form, and is a more practical approach. Difference schemes allow a more efficient use of the data, including the addition of new data points. Part of the significance of difference representations lies in the ability to recenter the data so that local data assumes greater importance relative to more distant data points. It also allows us to halt the computation (effectively reducing the degree of the polynomial) so that the interpolation can be local. The final topic of this chapter is spline interpolation. Here the basic idea is to use low-degree polynomials which connect as smoothly as possible as we move through the data. The example we focus on is cubic spline interpolation, where the resulting function retains continuity in its slope and curvature at each data point. As always, practical problem-solving and implementation in Python are addressed.
Peter R. Turner, Thomas Arildsen, Kathleen Kavanagh
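Here is a minimal sketch of the Lagrange-basis approach mentioned above (our illustration, not the book's code): each basis polynomial is 1 at its own node and 0 at the others, so the interpolant is just a weighted sum of the data values, with no Vandermonde solve needed.

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the interpolating polynomial at x via the Lagrange basis."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)   # L_i(x_j)=0 for j!=i, L_i(x_i)=1
        total += yi * basis
    return total

# Four nodes determine a cubic, so data from f(x) = x**3 is reproduced exactly.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x**3 for x in xs]
print(lagrange_eval(xs, ys, 1.5))   # 3.375, which is 1.5**3
```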
Chapter 7. Differential Equations
Abstract
Many models derived from real-life physical situations result in the need to solve a differential equation. To gain an understanding of the basics of this enormous topic, we begin with the simplest situation: a single first-order ordinary differential equation with a known initial condition. We begin with Euler’s method, which provides a good introduction to most of the more advanced methods; it also provides a basis for higher-order methods derived from either a Taylor series or a numerical-integration viewpoint. The first higher-order methods we discuss are the Runge-Kutta methods, which are fixed-step methods that use intermediate points to gain higher-order behavior. Next we turn to multistep methods, which use data from multiple prior points in explicit methods, or include the current target point in implicit methods. The two can be used to advantage as a predictor-corrector pair. Treating the independent and dependent variables as vector quantities allows systems of differential equations to be approached using these same methods. Higher-order differential equations can also be recast as systems of first-order equations. Shooting methods provide a good approach to (two-point) boundary value problems: the second initial condition (typically the slope) is an unknown, and we solve for that unknown to ensure the final point is on target. The methods of Chap. 5 can be combined with our initial value problem methods to solve the resulting “equation”.
Peter R. Turner, Thomas Arildsen, Kathleen Kavanagh
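A minimal Euler's method sketch, assuming the standard initial value problem setup described above (the code and the test problem y' = -y are our illustration, not the book's): each step simply follows the tangent line for a distance h.

```python
import math

def euler(f, t0, y0, h, n_steps):
    """Euler's method for y' = f(t, y) with y(t0) = y0."""
    t, y = t0, y0
    for _ in range(n_steps):
        y += h * f(t, y)    # step along the tangent line
        t += h
    return y

# Test problem: y' = -y, y(0) = 1, exact solution y(t) = e**(-t).
for h in (0.1, 0.05, 0.025):
    n = round(1.0 / h)
    approx = euler(lambda t, y: -y, 0.0, 1.0, h, n)
    print(h, approx, abs(approx - math.exp(-1.0)))
# The error at t = 1 is roughly halved each time h is halved,
# reflecting the first-order accuracy of Euler's method.
```

Runge-Kutta methods improve on this by sampling f at intermediate points within each step, raising the order of accuracy without shrinking h.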
Backmatter
Metadata
Title
Applied Scientific Computing
Authors
Prof. Peter R. Turner
Prof. Thomas Arildsen
Prof. Kathleen Kavanagh
Copyright Year
2018
Electronic ISBN
978-3-319-89575-8
Print ISBN
978-3-319-89574-1
DOI
https://doi.org/10.1007/978-3-319-89575-8