
2022 | Book

Numerical Analysis: A Graduate Course


About this book

This book introduces graduate students to the many applications of numerical computation, explaining in detail both how and why the included methods work in practice. It treats numerical analysis as a middle ground between practice and theory, addressing both the abstract mathematical analysis and the applied computation and programming models instrumental to the field. While the text uses pseudocode, Matlab and Julia codes are available online for students to use and to demonstrate implementation techniques. The textbook also emphasizes multivariate problems alongside single-variable problems, and deals with topics in randomness, including stochastic differential equations and randomized algorithms, as well as topics in optimization and approximation relevant to machine learning. Ultimately, it seeks to clarify issues in numerical analysis in the context of applications and to present accessible methods to students in mathematics and data science.

Table of Contents

Frontmatter
Chapter 1. Basics of Numerical Computation
Abstract
There was a time when computers were people [110]. But since the late 1950s, with the arrival and development of electronic digital computers, the people who did the calculations began to be replaced by machines. The history of computing at NASA recounted in Hidden Figures [233] gives an example of this. The machines have advanced enormously since that time. Gordon Moore, one of the founders of Intel, came up with his famous “law” in 1965: that the number of transistors on a single “chip” of silicon doubled every year [179]. Since then the doubling time has stretched from one year to over two years, but the exponential growth in the capabilities of computer hardware has not yet stopped.
David E. Stewart
Chapter 2. Computing with Matrices and Vectors
Abstract
This chapter is about numerical linear algebra, that is, matrix computations. Numerical computations with matrices (and vectors) are central to a great many algorithms, so there has been a great deal of work on this topic. We can only scratch the surface here, but you should feel free to use this as the starting point for finding the methods and analysis most appropriate for your application(s).
David E. Stewart
Chapter 3. Solving Nonlinear Equations
Abstract
Unlike solving linear equations, solving nonlinear equations cannot be done by a “direct” method except in special circumstances, such as solving a quadratic equation in one unknown.
David E. Stewart
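As an illustration of the iterative methods this chapter concerns (a sketch for this page, not code from the book), the bisection method finds a root of a continuous function by repeatedly halving an interval on which the function changes sign:

```python
# Bisection: given f continuous on [a, b] with f(a) and f(b) of opposite sign,
# halve the bracketing interval until it is shorter than tol.
def bisection(f, a, b, tol=1e-10):
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:      # root lies in [a, m]
            b, fb = m, fm
        else:                 # root lies in [m, b]
            a, fa = m, fm
    return 0.5 * (a + b)

# Approximate sqrt(2) as the positive root of x^2 - 2 = 0.
root = bisection(lambda x: x * x - 2.0, 1.0, 2.0)
```

Each iteration halves the interval, so the error decreases geometrically regardless of how complicated f is, which is exactly the kind of guarantee a "direct" method cannot offer for general nonlinear equations.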
Chapter 4. Approximations and Interpolation
Abstract
The representation and approximation of functions is central to the practice of numerical analysis and scientific computation. The oldest of these techniques is the Taylor series, developed by James Gregory and later by Brook Taylor. However, the development of Taylor series requires knowledge of derivatives of arbitrarily high order.
David E. Stewart
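Polynomial interpolation, a central topic of this chapter, avoids the need for derivatives entirely: it fits a polynomial through sampled function values. As a small sketch (illustrative, not from the book), the Lagrange form evaluates that interpolant directly:

```python
# Lagrange interpolation: evaluate the unique polynomial of degree <= n-1
# passing through the n points (xs[i], ys[i]) at the point t.
# No derivatives of the underlying function are needed, only samples.
def lagrange_eval(xs, ys, t):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (t - xj) / (xi - xj)
        total += term
    return total

# Interpolate f(x) = x^3 at four nodes; a degree-3 interpolant reproduces
# a cubic exactly, so the value at t = 1.5 should be 1.5^3 = 3.375.
val = lagrange_eval([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 8.0, 27.0], 1.5)
```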
Chapter 5. Integration and Differentiation
Abstract
For functions of one, two, or three variables, numerical integration is often done via interpolation. Error estimates for interpolation can be used to obtain error estimates for integration. In high dimensions, these approaches lose value as the amount of data needed to obtain a reasonable interpolant becomes exorbitant. But in low dimensions, and especially in dimension one, these approaches work very well.
David E. Stewart
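The "integration via interpolation" idea mentioned above can be made concrete with the composite trapezoidal rule, which integrates the piecewise-linear interpolant of f exactly (a minimal sketch for this page, not code from the book):

```python
# Composite trapezoidal rule: integrate f over [a, b] using n equal
# subintervals; equivalent to exactly integrating the piecewise-linear
# interpolant of f through the n+1 grid points.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

# Integrate x^2 over [0, 1]; the exact value is 1/3, and the rule's
# O(h^2) error follows from the interpolation error of linear pieces.
approx = trapezoid(lambda x: x * x, 0.0, 1.0, 1000)
```

This illustrates the chapter's point: the error estimate for the integration rule is inherited from the error estimate for the underlying interpolant.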
Chapter 6. Differential Equations
Abstract
Differential equations provide a language for representing many processes in nature, technology, and society. Ordinary differential equations have one independent variable on which all others depend. Usually, this independent variable is time, although it may be position along a rod or string. Typically, in these situations, the starting position or state is known at a particular time, and we wish to forecast how that will change with time.
David E. Stewart
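The forecasting task described above, starting from a known state and stepping forward in time, is what numerical ODE solvers do. The simplest such method, forward Euler, can be sketched as follows (an illustration for this page, not code from the book):

```python
# Forward Euler for the initial value problem y' = f(t, y), y(t0) = y0:
# advance with fixed step h using y(t + h) ~ y(t) + h * f(t, y(t)).
def euler(f, t0, y0, h, steps):
    t, y = t0, y0
    for _ in range(steps):
        y = y + h * f(t, y)
        t = t + h
    return y

# y' = y with y(0) = 1 has exact solution e^t; step to t = 1 with h = 0.001.
y1 = euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000)
```

Forward Euler is only first-order accurate (error proportional to h), which is why the chapter goes on to higher-order and more stable methods.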
Chapter 7. Randomness
Abstract
Probabilities have been used since the ninth century by Arab scholars studying cryptography [34].
David E. Stewart
Chapter 8. Optimization
Abstract
Optimization is the task of making the most of what you have. Mathematically, this is turned into finding either the maximum or minimum of a function \(f:A\rightarrow \mathbb {R}\) where \(A\subseteq \mathbb {R}^{n}\) is the feasible set, the set of allowed choices. Since maximizing \(f(\boldsymbol{x})\) over \(\boldsymbol{x}\in A\) is equivalent to minimizing \(-f(\boldsymbol{x})\) over \(\boldsymbol{x}\in A\), by convention we usually consider minimizing \(f(\boldsymbol{x})\) over \(\boldsymbol{x}\in A\). After reviewing the necessary and sufficient conditions for optimization, we turn to numerical methods. Of particular importance is the distinction between convex and non-convex optimization problems.
David E. Stewart
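As a minimal sketch of the minimization convention described above (illustrative only, not code from the book), gradient descent repeatedly steps against the gradient of a smooth function; maximizing f is handled by minimizing −f:

```python
# Gradient descent: minimize a smooth f by stepping against its gradient.
# grad(x) returns the gradient of f at x as a list; lr is a fixed step size.
def gradient_descent(grad, x0, lr=0.1, iters=200):
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# f(x) = (x0 - 1)^2 + (x1 + 2)^2 is convex with unique minimizer (1, -2).
xmin = gradient_descent(lambda x: [2 * (x[0] - 1.0), 2 * (x[1] + 2.0)],
                        [0.0, 0.0])
```

For a convex quadratic like this, a small fixed step size contracts the error geometrically; for non-convex problems, as the chapter emphasizes, such a method may only reach a local minimum.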
Backmatter
Metadata
Title
Numerical Analysis: A Graduate Course
Author
David E. Stewart
Copyright Year
2022
Electronic ISBN
978-3-031-08121-7
Print ISBN
978-3-031-08120-0
DOI
https://doi.org/10.1007/978-3-031-08121-7
