
2010 | Book

Numerical Methods for Ordinary Differential Equations

Initial Value Problems

About this book

Numerical Methods for Ordinary Differential Equations is a self-contained introduction to a fundamental field of numerical analysis and scientific computation. Written for undergraduate students with a mathematical background, this book focuses on the analysis of numerical methods without losing sight of the practical nature of the subject.

It covers the topics traditionally treated in a first course, but also highlights new and emerging themes. Chapters are broken down into 'lecture'-sized pieces, motivated and illustrated by numerous theoretical and computational examples.

Over 200 exercises are provided and these are starred according to their degree of difficulty. Solutions to all exercises are available to authorized instructors.

The book covers key foundation topics:

– Taylor series methods
– Runge–Kutta methods
– Linear multistep methods
– Convergence
– Stability

and a range of modern themes:

– Adaptive stepsize selection
– Long-term dynamics
– Modified equations
– Geometric integration
– Stochastic differential equations

A basic university-level calculus class is assumed as the prerequisite, although appropriate background results are also summarized in appendices. A dedicated website for the book, containing extra information, can be found via www.springer.com.

Table of Contents

Frontmatter
1. ODEs—An Introduction
Abstract
Mathematical models in a vast range of disciplines, from science and technology to sociology and business, describe how quantities change. This leads naturally to the language of ordinary differential equations (ODEs). Typically, we first encounter ODEs in basic calculus courses, and we see examples that can be solved with pencil-and-paper techniques. This way, we learn about ODEs that are linear (constant or variable coefficient), homogeneous or inhomogeneous, separable, etc. Other ODEs not belonging to one of these classes may also be solvable by special one-off tricks. However, what motivates this book is the fact that the overwhelming majority of ODEs do not have solutions that can be expressed in terms of simple functions.
David F. Griffiths, Desmond J. Higham
2. Euler’s Method
Abstract
During the course of this book we will describe three families of methods for numerically solving IVPs: the Taylor series (TS) method, linear multistep methods (LMMs) and Runge–Kutta (RK) methods.
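As an illustrative aside (not code from the book), here is a minimal Python sketch of Euler's method for a scalar IVP x'(t) = f(t, x), x(t0) = x0; the routine name and the test problem x' = -x are assumptions made here for illustration.
# Minimal sketch of Euler's method for x'(t) = f(t, x), x(0) = x0
# (illustrative only; the test problem x' = -x is an assumed example).
def euler(f, x0, t0, tf, n):
    """Advance from t0 to tf in n equal steps of size h = (tf - t0)/n."""
    h = (tf - t0) / n
    t, x = t0, x0
    for _ in range(n):
        x = x + h * f(t, x)   # Euler update: x_{n+1} = x_n + h f(t_n, x_n)
        t = t + h
    return x

if __name__ == "__main__":
    import math
    # x' = -x, x(0) = 1 has exact solution exp(-t); compare at t = 1.
    print(euler(lambda t, x: -x, 1.0, 0.0, 1.0, 100), math.exp(-1.0))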
David F. Griffiths, Desmond J. Higham
3. The Taylor Series Method
Abstract
An alternative is to use a more sophisticated recurrence relation at each step in order to achieve greater accuracy (for the same value of h) or a similar level of accuracy with a larger value of h (and, therefore, fewer steps).
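To make the idea concrete, here is a minimal sketch (an illustration, not an excerpt from the book) of the second-order Taylor series method TS(2) for the assumed test problem x' = -x, where x'' = x, so both derivatives needed by the method are available in closed form.
# Sketch of a TS(2) step for the assumed test problem x'(t) = -x(t),
# where x'' = x, so x_{n+1} = x_n + h*(-x_n) + (h**2/2)*x_n.
import math

def ts2(x0, t0, tf, n):
    h = (tf - t0) / n
    x = x0
    for _ in range(n):
        x = x + h * (-x) + 0.5 * h * h * x   # uses x' and x'' of the solution
    return x

if __name__ == "__main__":
    print(ts2(1.0, 0.0, 1.0, 100), math.exp(-1.0))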
David F. Griffiths, Desmond J. Higham
4. Linear Multistep Methods—I: Construction and Consistency
Abstract
The effectiveness of the family of TS(p) methods has been evident in the preceding chapter. For order p > 1, however, they suffer a serious disadvantage in that they require the right-hand side of the differential equation to be differentiated a number of times. This often rules out their use in real-world applications, which generally involve (large) systems of ODEs whose differentiation is impractical unless automated tools are used [23]. We look, therefore, for alternatives that do not require the use of second and higher derivatives of the solution.
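As one concrete example of such an alternative (sketched here for illustration, not taken from the book), the two-step Adams–Bashforth method x_{n+2} = x_{n+1} + h(3/2 f_{n+1} - 1/2 f_n) advances the solution using only evaluations of f; the starting procedure (one Euler step) and the test problem x' = -x are assumptions.
# Sketch of the two-step Adams-Bashforth method (AB2),
#   x_{n+2} = x_{n+1} + h*(3/2 f_{n+1} - 1/2 f_n),
# which needs only evaluations of f; Euler's method supplies the starting value.
import math

def ab2(f, x0, t0, tf, n):
    h = (tf - t0) / n
    x_prev = x0
    x_curr = x_prev + h * f(t0, x_prev)          # one Euler step to start
    for k in range(1, n):
        t = t0 + k * h                           # time level of x_curr
        x_next = x_curr + h * (1.5 * f(t, x_curr) - 0.5 * f(t - h, x_prev))
        x_prev, x_curr = x_curr, x_next
    return x_curr

if __name__ == "__main__":
    print(ab2(lambda t, x: -x, 1.0, 0.0, 1.0, 100), math.exp(-1.0))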
David F. Griffiths, Desmond J. Higham
5. Linear Multistep Methods—II: Convergence and Zero-Stability
Abstract
Means of determining the coefficients in LMMs were described in Chapter 4 and criteria now need to be established to identify those methods that are practically useful. In this section we describe some of the behaviour that should be expected of methods (in general) and, in subsequent sections, indicate how this behaviour can be designed into LMMs.
David F. Griffiths, Desmond J. Higham
6. Linear Multistep Methods—III: Absolute Stability
Abstract
Thus, convergent methods generate numerical solutions that are arbitrarily close to the exact solution of the IVP provided that h is taken to be sufficiently small. Since non-convergent methods are of little practical use we shall henceforth assume that all LMMs used are convergent—they are consistent and zero-stable.
David F. Griffiths, Desmond J. Higham
7. Linear Multistep Methods—IV: Systems of ODEs
Abstract
In this chapter we describe the use of LMMs to solve systems of ODEs and show how the notion of absolute stability can be generalized to such problems. We begin with an example.
David F. Griffiths, Desmond J. Higham
8. Linear Multistep Methods—V: Solving Implicit Methods
Abstract
The discussion of absolute stability in previous chapters shows that it can be advantageous to use an implicit LMM—usually when the step size in an explicit method has to be chosen on grounds of stability rather than accuracy. One then has to compute the numerical solution at each step by solving a nonlinear system of algebraic equations.
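As a rough illustration of this step (not the book's own algorithm), the sketch below performs one backward Euler step x_{n+1} = x_n + h f(t_{n+1}, x_{n+1}) and resolves the implicit equation by simple fixed-point iteration; the tolerance, iteration cap and Euler predictor are assumed choices, and for stiff problems a Newton iteration would normally replace the fixed-point loop.
# Sketch of one backward Euler step, x_{n+1} = x_n + h f(t_{n+1}, x_{n+1}),
# with the implicit equation solved by simple fixed-point iteration.
def backward_euler_step(f, t, x, h, tol=1e-12, max_iter=50):
    t_new = t + h
    guess = x + h * f(t, x)              # explicit Euler predictor
    for _ in range(max_iter):
        new_guess = x + h * f(t_new, guess)
        if abs(new_guess - guess) < tol:
            return new_guess
        guess = new_guess
    return guess                          # return last iterate if not converged

if __name__ == "__main__":
    # One step of size h = 0.1 for x' = -5x from x(0) = 1.
    print(backward_euler_step(lambda t, x: -5.0 * x, 0.0, 1.0, 0.1))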
David F. Griffiths, Desmond J. Higham
9. Runge–Kutta Method—I: Order Conditions
Abstract
Runge–Kutta (RK) methods are one-step methods composed of a number of stages. A weighted average of the slopes (f) of the solution computed at nearby points is used to determine the solution at t = t_{n+1} from that at t = t_n. Euler’s method is the simplest such method and involves just one stage.
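For instance (an illustrative sketch, not reproduced from the book), the classical four-stage RK4 method combines four slope evaluations k1, ..., k4 in a weighted average; the test problem x' = -x below is an assumed example.
# Sketch of the classical four-stage Runge-Kutta method (RK4): the update is a
# weighted average of four slope evaluations k1..k4 near (t_n, x_n).
import math

def rk4(f, x0, t0, tf, n):
    h = (tf - t0) / n
    t, x = t0, x0
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h * k1 / 2)
        k3 = f(t + h / 2, x + h * k2 / 2)
        k4 = f(t + h, x + h * k3)
        x = x + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t = t + h
    return x

if __name__ == "__main__":
    print(rk4(lambda t, x: -x, 1.0, 0.0, 1.0, 10), math.exp(-1.0))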
David F. Griffiths, Desmond J. Higham
10. Runge–Kutta Methods—II: Absolute Stability
Abstract
The notion of absolute stability developed in Chapter 6 for LMMs is equally relevant to RK methods. Applying an RK method to the linear ODE x′(t) = λx(t) with ℜ(λ) < 0, absolute stability requires that x_n → 0 as n → ∞.
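As a small illustration (assumed example values, not from the book): Euler's method applied to x' = λx gives x_{n+1} = (1 + hλ)x_n, so the requirement x_n → 0 becomes |1 + hλ| < 1, which the sketch below checks for one stable and one unstable step size.
# Applying Euler's method (the simplest RK method) to x' = lam*x gives
#   x_{n+1} = (1 + h*lam) * x_n,
# so x_n -> 0 exactly when |1 + h*lam| < 1. The values of lam and h are assumed
# examples chosen to show one stable and one unstable choice of step size.
def euler_amplification(lam, h):
    return 1.0 + h * lam          # one-step growth factor for x' = lam*x

if __name__ == "__main__":
    lam = -10.0
    for h in (0.1, 0.3):          # h = 0.1 is stable, h = 0.3 is not
        r = euler_amplification(lam, h)
        print(f"h = {h}: |R| = {abs(r):.2f} ->", "decays" if abs(r) < 1 else "grows")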
David F. Griffiths, Desmond J. Higham
11. Adaptive Step Size Selection
Abstract
All the methods discussed thus far have been parameterized by the step size h. The number of steps required to integrate over a given interval [0, t_f] is proportional to 1/h and the accuracy of the results is proportional to h^p, for a method of order p. Thus, halving h is expected to double the amount of computational effort while reducing the error by a factor of 2^p (more than an extra digit of accuracy if p > 3).
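This error scaling can be checked numerically; the short sketch below (an assumed experiment, not from the book) uses Euler's method, for which p = 1, so halving h should roughly halve the global error.
# Check that halving h reduces the global error by a factor of about 2^p.
# Euler's method (p = 1) is used on the assumed test problem x' = -x, x(0) = 1.
import math

def euler(f, x0, t0, tf, n):
    h = (tf - t0) / n
    t, x = t0, x0
    for _ in range(n):
        x, t = x + h * f(t, x), t + h
    return x

if __name__ == "__main__":
    exact = math.exp(-1.0)
    errors = [abs(euler(lambda t, x: -x, 1.0, 0.0, 1.0, n) - exact)
              for n in (100, 200, 400)]
    # Successive error ratios should approach 2^1 = 2 for a first-order method.
    print([errors[i] / errors[i + 1] for i in range(2)])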
David F. Griffiths, Desmond J. Higham
12. Long-Term Dynamics
Abstract
There are many applications where one is concerned with the long-term behaviour of nonlinear ODEs. It is therefore of great interest to know whether this behaviour is accurately captured when they are solved by numerical methods.
David F. Griffiths, Desmond J. Higham
13. Modified Equations
Abstract
Thus far the emphasis in this book has been focused firmly on the solutions of IVPs and how well these are approximated by a variety of numerical methods. This attention is now shifted to the numerical method (primarily LMMs) and we ask whether the numerically computed values might be closer to the solution of a modified differential equation than they are to the solution of the original differential equation. At first sight this may appear to introduce an unnecessary level of complication, but we will see in this chapter (as well as those that follow on geometric integration) that constructing a new ODE that very accurately approximates the numerical method can provide important insights about our computations.
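As a brief illustration of the idea (a standard result, stated here under the assumption that f is smooth and the problem is autonomous, and not quoted from the book), the Euler iterates x_{n+1} = x_n + h f(x_n) for x' = f(x) lie, per step, one power of h closer to the solution of the modified equation
\[
  \tilde{x}'(t) = f(\tilde{x}) - \frac{h}{2}\, f'(\tilde{x})\, f(\tilde{x})
\]
than to the solution of the original ODE x' = f(x).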
David F. Griffiths, Desmond J. Higham
14. Geometric Integration Part I—Invariants
Abstract
We judge a numerical method by its ability to “approximate” the ODE. It is perfectly natural to
– fix an initial condition,
– fix a time t_f
and ask how closely the method can match x(t_f), perhaps in the limit h → 0. This led us, in earlier chapters, to the concepts of global error and order of convergence. However, there are other senses in which approximation quality may be studied. We have seen that absolute stability deals with long-time behaviour on linear ODEs, and we have also looked at simple long-time dynamics on nonlinear problems with fixed points. In this chapter and the next we look at another well-defined sense in which the ability of a numerical method to reproduce the behaviour of an ODE can be quantified—we consider ODEs with a conservative nature—that is, certain algebraic quantities remain constant (are conserved) along trajectories. This gives us a taste of a very active research area that has become known as geometric integration, a term that, to the best of our knowledge, was coined by Sanz-Serna in his review article [60]. The material in these two chapters borrows heavily from Hairer et al. [26] and Sanz-Serna and Calvo [61].
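A standard example of this (sketched below in Python as an illustration, not taken from the book): for the harmonic oscillator u' = v, v' = -u the quantity u² + v² is conserved along exact trajectories; forward Euler inflates it at every step, while the implicit midpoint rule, which can be solved in closed form for this linear problem, preserves it exactly. The step size and step count are assumed example values.
# Compare conservation of I(u, v) = u^2 + v^2 for u' = v, v' = -u under
# forward Euler (fails) and the implicit midpoint rule (exact for this problem).
def forward_euler(u, v, h, n):
    for _ in range(n):
        u, v = u + h * v, v - h * u
    return u, v

def implicit_midpoint(u, v, h, n):
    a = h / 2.0
    d = 1.0 + a * a                       # closed-form solve of the implicit step
    for _ in range(n):
        u, v = ((1 - a * a) * u + 2 * a * v) / d, (-2 * a * u + (1 - a * a) * v) / d
    return u, v

if __name__ == "__main__":
    h, n = 0.1, 1000
    for name, method in (("Euler", forward_euler), ("midpoint", implicit_midpoint)):
        u, v = method(1.0, 0.0, h, n)
        print(name, u * u + v * v)        # exact value is 1 for all time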
David F. Griffiths, Desmond J. Higham
15. Geometric Integration Part II—Hamiltonian Dynamics
Abstract
This chapter continues our study of geometric features of ODEs. We look at Hamiltonian problems, which possess the important property of symplecticness. As in the previous chapter our emphasis is on
– showing that methods must be carefully chosen if they are to possess the correct geometric property, and
– using the idea of modified equations to explain the qualitative behaviour of numerical methods.
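By way of illustration (an assumed example, not the book's own code), the sketch below applies the symplectic Euler method to the pendulum Hamiltonian H(p, q) = p²/2 − cos q; the energy is not conserved exactly, but it remains close to its initial value over long times, which is the kind of qualitative behaviour these chapters study.
# Symplectic Euler for the pendulum: H(p, q) = p^2/2 - cos(q),
# with q' = p, p' = -sin(q). Step size, step count and initial data are
# assumed example values.
import math

def symplectic_euler(p, q, h, n):
    for _ in range(n):
        p = p - h * math.sin(q)   # update p first using the old q ...
        q = q + h * p             # ... then update q using the new p
    return p, q

if __name__ == "__main__":
    p0, q0, h, n = 0.0, 1.5, 0.1, 10000
    p, q = symplectic_euler(p0, q0, h, n)
    H = lambda p, q: 0.5 * p * p - math.cos(q)
    print(H(p0, q0), H(p, q))     # the two energy values should stay close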
David F. Griffiths, Desmond J. Higham
16. Stochastic Differential Equations
Abstract
Many mathematical modelling scenarios involve an inherent level of uncertainty. For example, rate constants in a chemical reaction model might be obtained experimentally, in which case they are subject to measurement errors. Or the simulation of an epidemic might require an educated guess for the initial number of infected individuals. More fundamentally, there may be microscopic effects that (a) we are not able or willing to account for directly, but (b) can be approximated stochastically. For example, the dynamics of a coin toss could, in principle, be simulated to high precision if we were prepared to measure initial conditions sufficiently accurately and take account of environmental effects, such as wind speed and air pressure. However, for most practical purposes it is perfectly adequate, and much more straightforward, to model the outcome of the coin toss using a random variable that is equally likely to take the value heads or tails. Stochastic models may also be used in an attempt to deal with ignorance. For example, in mathematical finance, there appears to be no universal “law of motion” for the movement of stock prices, but random models seem to fit well to real data.
David F. Griffiths, Desmond J. Higham
Backmatter
Metadata
Title
Numerical Methods for Ordinary Differential Equations
Authors
David F. Griffiths
Desmond J. Higham
Copyright Year
2010
Publisher
Springer London
Electronic ISBN
978-0-85729-148-6
Print ISBN
978-0-85729-147-9
DOI
https://doi.org/10.1007/978-0-85729-148-6
