
2016 | Book

Introduction to Scientific Computing and Data Analysis


About this book

This textbook provides an introduction to numerical computing and its applications in science and engineering. The topics covered include those usually found in an introductory course, as well as those that arise in data analysis. This includes optimization and regression-based methods using the singular value decomposition. The emphasis is on problem solving, and there are numerous exercises throughout the text concerning applications in engineering and science. The essential role of the mathematical theory underlying the methods is also considered, both for understanding how the methods work and for understanding how the error in the computation depends on the method being used. The MATLAB codes used to produce most of the figures and data tables in the text are available on the author’s website and SpringerLink.

Table of Contents

Frontmatter
Chapter 1. Introduction to Scientific Computing
Abstract
This chapter provides a brief introduction to the floating-point number system used in most scientific and engineering applications. A few examples are given in the next section illustrating some of the challenges of using finite precision arithmetic, but it is worth quoting Donald Knuth to get things started. If you are unfamiliar with him, he was instrumental in the development of the analysis of algorithms, and is the creator of TeX. Anyway, here are the relevant quotes (Knuth [1997]):
Mark H. Holmes
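As a quick, hedged illustration of the kind of finite precision issue the chapter has in mind (this MATLAB snippet is not from the book, and the tolerance 10*eps is just an illustrative choice), 0.1 + 0.2 is not stored exactly in double precision:

% finite precision: 0.1 + 0.2 is not exactly 0.3 in double precision
x = 0.1 + 0.2;
x == 0.3                % returns false (0)
abs(x - 0.3) < 10*eps   % a safer comparison using machine epsilon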
Chapter 2. Solving A Nonlinear Equation
Abstract
In this chapter one of the more common mathematical problems is studied, which is to find the solution, or solutions, of an equation of the form f(x) = 0. To illustrate the situation, we begin with a few examples.
Mark H. Holmes
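As an illustrative sketch of the kind of problem studied here (the equation x^3 - 2 = 0, the starting guess, and the fixed iteration count below are made up for this example; Newton's method is one standard approach for f(x) = 0):

f  = @(x) x.^3 - 2;          % example equation f(x) = 0
df = @(x) 3*x.^2;            % its derivative
x = 1.5;                     % illustrative starting guess
for k = 1:8
    x = x - f(x)/df(x);      % Newton iteration
end
x                            % approaches 2^(1/3)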
Chapter 3. Matrix Equations
Abstract
This chapter concentrates on solving the matrix equation Ax = b, and the chapter to follow investigates various ways to compute the eigenvalues of a matrix A. Together, they are central components of what is called numerical linear algebra. What will be evident from reading this material is the prominent role matrix factorizations play in the subject. To explain what this involves, given a matrix A, one factors it as A = BC or A = BCD. The factors B, C, and D are matrices with nice properties that make them easy to compute with. The time-consuming computational step is finding the factorization. There are many useful factorizations, and a listing of some considered in this text can be found in the index.
Mark H. Holmes
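A minimal sketch of the factor-then-solve idea, using MATLAB's built-in LU factorization (the 3-by-3 matrix and right-hand side are made up for illustration, not taken from the text):

A = [4 -1 0; -1 4 -1; 0 -1 4];   % example matrix
b = [1; 2; 3];                   % example right-hand side
[L, U, P] = lu(A);               % factorization P*A = L*U (the costly step)
y = L \ (P*b);                   % forward substitution
x = U \ y;                       % back substitution gives the solution of A*x = b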
Chapter 4. Eigenvalue Problems
Abstract
The problem considered in this chapter is: given an n × n matrix A, find the number(s) \(\lambda\) and nonzero vectors x that satisfy
$$\displaystyle{ \mathbf{A}\mathbf{x} =\lambda \mathbf{x}. }$$
(4.1)
This is an eigenvalue problem, where \(\lambda\) is an eigenvalue and x is an eigenvector. There are a couple of observations worth making about this problem. First, x = 0 is always a solution of (4.1), and so what is of interest are the nonzero solutions. Second, if x is a solution, then α x, for any number α, is also a solution.
Mark H. Holmes
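A minimal MATLAB check of the definition in (4.1), using a small made-up matrix and the built-in eig function (this is an illustration, not a method from the chapter):

A = [2 1; 1 3];                  % example symmetric matrix
[V, D] = eig(A);                 % columns of V are eigenvectors, diag(D) the eigenvalues
A*V(:,1) - D(1,1)*V(:,1)         % residual of A*x = lambda*x, near zero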
Chapter 5. Interpolation
Abstract
The topic of this chapter is interpolation, which relates to passing a curve through a set of data points. To put this in context, extracting information from data is one of the central objectives in science and engineering and exactly what or how this is done depends on the particular setting. Two examples are shown in Figure 5.1. Figure 5.1(L) contains data obtained from measurements of high redshift type supernovae. As is often the case with computerized testing systems, there are many data points and there is some scatter in the values obtained.
Mark H. Holmes
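As a small illustration of passing a curve through a set of data points (the data values below are made up, and MATLAB's built-in spline is simply one convenient interpolant):

x = 0:5;                         % sample locations
y = [0 0.8 0.9 0.1 -0.8 -1.0];   % made-up data values
xq = linspace(0, 5, 200);
yq = spline(x, y, xq);           % cubic spline interpolant through the data
plot(x, y, 'o', xq, yq, '-')     % data points and interpolating curve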
Chapter 6. Numerical Integration
Abstract
The objective of this chapter is to derive and then test methods that can be used to evaluate the definite integral. In most calculus textbooks the examples and problems dedicated to integration are not particularly complicated, although some require a clever combination of methods to carry out the integration. In the real world the situation is much worse. As an example, to find the deformation of an elastic body when compressed by a rigid punch it is necessary to evaluate an integral given in Gladwell [1980]. Moreover, it is relatively easy to find integrals even worse than that one. To illustrate, in the study of the emissions from a pulsar it is necessary to evaluate an integral, involving the modified Bessel function K_2, given in Gwinn et al. [2012]. The point here is that effective numerical methods for evaluating integrals are needed, and our objective is to determine what they are.
Mark H. Holmes
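A minimal sketch of one such method, the composite trapezoidal rule (the integrand exp(-x^2), interval, and number of subintervals are stand-ins for illustration, not the integrals referred to above):

f = @(x) exp(-x.^2);             % illustrative integrand
a = 0; b = 1; n = 100;           % interval and number of subintervals
x = linspace(a, b, n+1);
h = (b - a)/n;
I = h*(sum(f(x)) - 0.5*(f(a) + f(b)))   % trapezoidal approximation of the integral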
Chapter 7. Initial Value Problems
Abstract
In this chapter we derive numerical methods to solve the first-order differential equation
$$\displaystyle{ \frac{dy} {dt} = f(t,y),\;\;\text{ for }\;0 <t, }$$
(7.1)
where
$$\displaystyle{ y(0) =\alpha. }$$
(7.2)
This is known as an initial value problem (IVP), and it consists of the differential equation (7.1) along with the initial condition in (7.2). Numerical methods for solving this problem are first derived for the case when there is a single differential equation. Afterwards, the methods are extended to problems involving multiple equations.
Mark H. Holmes
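A minimal sketch of the simplest method of this kind, Euler's method, applied to (7.1)-(7.2) for an illustrative choice of f and alpha (the chapter itself derives more accurate schemes as well, such as the RK4 method mentioned in the Chapter 9 abstract):

f = @(t, y) -2*t*y;              % illustrative right-hand side f(t,y)
alpha = 1; T = 2; n = 200;       % initial value, final time, number of steps
h = T/n;  t = 0;  y = alpha;
for k = 1:n
    y = y + h*f(t, y);           % Euler step
    t = t + h;
end
y                                % approximation of y(T); exact solution here is exp(-t^2)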
Chapter 8. Optimization
Abstract
The problem central to this chapter is easy to state: given a function F(v), find the point \(\mathbf{v}_{m} \in \mathbb{R}^{n}\) where F achieves its minimum value.
Mark H. Holmes
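As a small illustration of the minimization problem (the test function and starting point are made up, and fminsearch is simply a convenient derivative-free built-in, not necessarily a method derived in the chapter):

F  = @(v) 100*(v(2) - v(1)^2)^2 + (1 - v(1))^2;   % illustrative test function F(v)
v0 = [-1; 1];                                     % starting point
vm = fminsearch(F, v0)                            % approximate minimizer, near [1; 1]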
Chapter 9. Data Analysis
Abstract
In this chapter we consider a problem we have examined in earlier chapters, which is how to derive information from data. This was central to Chapter 5, where we derived interpolation formulas, and also in Chapter 8, where we investigated ways to use linear and nonlinear regression. In this chapter, four different situations are considered. The first three have a lot in common, and are examples illustrating the usefulness of the singular value decomposition (SVD) in data analysis. The SVD is explained in Section 4.5, and these three methods also make use of the regression material covered in Section 8.2. The fourth method relates to what is sometimes called causal data, which means that there is an underlying mathematical model to explain the observed behavior, but it is necessary to fit the model to the data. This is similar to the regression problem, but in this case the model function comes from equations derived elsewhere, such as Newton’s laws of mechanics or Maxwell’s equations of electrodynamics. This material relies heavily on Section 5.4.1, which covers cubic B-splines, and it uses the RK4 method, which is derived in Section 7.5.
Mark H. Holmes
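A minimal sketch of the factorization at the heart of the first three methods, computed with MATLAB's built-in svd (the data matrix below is a small made-up example):

A = [2 0 1; -1 3 0; 0 1 4; 1 1 1];   % made-up data matrix
[U, S, V] = svd(A);                  % singular value decomposition A = U*S*V'
norm(A - U*S*V')                     % near zero: the factors reproduce A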
Backmatter
Metadata
Title
Introduction to Scientific Computing and Data Analysis
Author
Mark H. Holmes
Copyright Year
2016
Electronic ISBN
978-3-319-30256-0
Print ISBN
978-3-319-30254-6
DOI
https://doi.org/10.1007/978-3-319-30256-0