
2021 | Book

Numerical Methods and Optimization

Theory and Practice for Engineers


About this book

This text, covering a very large span of numerical methods and optimization, is primarily aimed at advanced undergraduate and graduate students. A background in calculus and linear algebra is the only mathematical requirement. The abundance of advanced methods and practical applications will be attractive to scientists and researchers working in different branches of engineering. The reader is progressively introduced to general numerical methods and optimization algorithms in each chapter. Examples accompany the various methods and guide students to a better understanding of the applications. The reader is often given the opportunity to verify their results with complex programming code. Each chapter ends with graduated exercises that furnish the student with new cases to study as well as ideas for exam or homework problems for the instructor. A set of programs written in Matlab™, covering both the numerical and the optimization methods, is available on the author's personal website.

Table of contents

Frontmatter
Chapter 1. Interpolation and Approximation
Abstract
This chapter contains generalities about approximation by a polynomial or another function. Newton and Lagrange polynomials are detailed. In addition to a regular spacing of the domain, the interest of irregular spacing is emphasized, with Chebyshev and Hermite polynomials in particular. Interpolation by cubic spline functions is analyzed and Bézier curves are presented. All the methods are accompanied by detailed numerical examples.
Jean-Pierre Corriou
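As a minimal illustration of the interpolation techniques listed above, the following Python sketch evaluates the Lagrange interpolating polynomial at a point; the nodes and the interpolated function are illustrative assumptions, not examples from the book.

```python
import numpy as np

def lagrange_eval(x_nodes, y_nodes, x):
    """Evaluate the Lagrange interpolating polynomial through
    (x_nodes[i], y_nodes[i]) at the point x."""
    total = 0.0
    n = len(x_nodes)
    for i in range(n):
        # Build the i-th Lagrange basis polynomial L_i(x)
        L = 1.0
        for j in range(n):
            if j != i:
                L *= (x - x_nodes[j]) / (x_nodes[i] - x_nodes[j])
        total += y_nodes[i] * L
    return total

# Illustrative data: interpolate f(x) = sin(x) on four nodes in [0, pi]
x_nodes = np.linspace(0.0, np.pi, 4)
y_nodes = np.sin(x_nodes)
print(lagrange_eval(x_nodes, y_nodes, 1.0), np.sin(1.0))
```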
Chapter 2. Numerical Integration
Abstract
Numerical integration is first introduced through the Newton–Cotes integration formulas, such as the trapezoidal rule and Simpson's rule. Repeated integration, Romberg's integration, and Richardson's extrapolation are explained. Then, the interest of integration with irregularly spaced points is emphasized, using different orthogonal polynomials: Legendre, Laguerre, Chebyshev, and Hermite. Finally, Gauss–Legendre quadrature is detailed. The methods are accompanied by numerical examples.
Jean-Pierre Corriou
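A minimal Python sketch of the composite trapezoidal and Simpson's rules named above; the test integrand is an illustrative assumption.

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals on [a, b]."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals on [a, b]."""
    if n % 2:
        raise ValueError("n must be even")
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3.0 * (y[0] + 4.0 * y[1:-1:2].sum() + 2.0 * y[2:-1:2].sum() + y[-1])

# Illustrative test: the integral of exp(x) on [0, 1] equals e - 1
print(trapezoid(np.exp, 0.0, 1.0, 16), simpson(np.exp, 0.0, 1.0, 16), np.e - 1.0)
```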
Chapter 3. Equation Solving by Iterative Methods
Abstract
The purpose of this chapter is the solution of a single equation. After the explanation of Graeffe's, Bernoulli's, and Bairstow's methods designed for polynomials, a large range of iterative methods applicable to any function is presented: bisection, regula falsi, successive substitutions, Newton's method, and derived methods such as the secant, Wegstein's, and Aitken's methods. The homotopy method concludes this chapter. All the described methods are illustrated by a significant numerical example.
Jean-Pierre Corriou
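A minimal Python sketch of two of the iterative methods named above, bisection and Newton's method; the test equation x − cos x = 0 is an illustrative assumption.

```python
import math

def bisection(f, a, b, tol=1e-10, max_iter=100):
    """Bisection method: f(a) and f(b) must have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must bracket a root")
    for _ in range(max_iter):
        c = 0.5 * (a + b)
        fc = f(c)
        if abs(fc) < tol or (b - a) < tol:
            return c
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return 0.5 * (a + b)

def newton(f, dfdx, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative equation: x - cos(x) = 0, root near x = 0.739
f = lambda x: x - math.cos(x)
df = lambda x: 1.0 + math.sin(x)
print(bisection(f, 0.0, 1.0), newton(f, df, 1.0))
```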
Chapter 4. Numerical Operations on Matrices
Abstract
Numerical calculation makes extensive use of matrices. Many general properties of matrices are first detailed. Linear transformations, eigenvalue properties and their use with the Gershgorin and Cayley–Hamilton theorems, and the power method are explained. Similar matrices, Hermitian matrices, matrix norms, and the condition number are detailed. Finally, Rutishauser's, Householder's, and Francis's reduction methods are examined. All these methods are accompanied by complete numerical examples.
Jean-Pierre Corriou
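A minimal Python sketch of the power method mentioned above for the dominant eigenvalue; the test matrix is an illustrative assumption.

```python
import numpy as np

def power_method(A, num_iter=100, tol=1e-12):
    """Power method: iterate x <- A x / ||A x|| to approximate the
    eigenvalue of largest modulus and its eigenvector."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(num_iter):
        y = A @ x
        lam_new = np.linalg.norm(y)
        x = y / lam_new
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    # The Rayleigh quotient gives the signed eigenvalue estimate
    return x @ A @ x, x

# Illustrative symmetric matrix; compare with numpy's eigenvalues
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, v = power_method(A)
print(lam, np.linalg.eigvalsh(A))
```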
Chapter 5. Numerical Solution of Systems of Algebraic Equations
Abstract
The solution of systems of linear or nonlinear equations is sought by many different techniques. First, linear systems written in matrix form are solved by the Gauss and Gauss–Jordan algorithms. Then, techniques for particular matrices are presented, such as LDLᵀ factorization, Cholesky decomposition, and singular value decomposition. The case of least squares for linear overdetermined systems, iterative techniques for large systems such as Jacobi and Gauss–Seidel, and the case of tridiagonal systems are treated in detail. The solution of nonlinear systems is given by the Newton–Raphson method, and optimization techniques are mentioned. The latter are the subject of Chapter 9 in the optimization part of the book. All the methods are illustrated by significant numerical examples.
Jean-Pierre Corriou
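A minimal Python sketch of the Gauss–Seidel iteration mentioned above for linear systems; the test system is an illustrative assumption.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel iteration for A x = b; converges, e.g., for
    diagonally dominant or symmetric positive definite A."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated components of x on the left of the diagonal
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Illustrative diagonally dominant system; compare with a direct solve
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
print(gauss_seidel(A, b), np.linalg.solve(A, b))
```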
Chapter 6. Numerical Integration of Ordinary Differential Equations
Abstract
The solution of systems of ordinary differential equations is demonstrated by a variety of techniques. General properties of ODEs are first introduced, and then the error problems are explained using Euler's method. A large range of explicit, semi-implicit, and implicit Runge–Kutta methods of different orders is detailed. They are followed by multi-step methods such as Adams–Moulton and predictor–corrector techniques. The stability of integration methods is discussed. The particular cases of stiff systems and differential–algebraic systems are explained. Many numerical examples illustrate the different techniques.
Jean-Pierre Corriou
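A minimal Python sketch of the classical explicit fourth-order Runge–Kutta method, one of the families discussed above; the test ODE is an illustrative assumption.

```python
import numpy as np

def rk4(f, t_span, y0, n_steps):
    """Classical fourth-order Runge-Kutta method for y' = f(t, y)."""
    t0, tf = t_span
    h = (tf - t0) / n_steps
    t = np.linspace(t0, tf, n_steps + 1)
    y = np.zeros((n_steps + 1,) + np.shape(y0))
    y[0] = y0
    for k in range(n_steps):
        k1 = f(t[k], y[k])
        k2 = f(t[k] + h / 2, y[k] + h / 2 * k1)
        k3 = f(t[k] + h / 2, y[k] + h / 2 * k2)
        k4 = f(t[k] + h, y[k] + h * k3)
        y[k + 1] = y[k] + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return t, y

# Illustrative ODE: y' = -2y, y(0) = 1, exact solution exp(-2t)
t, y = rk4(lambda t, y: -2.0 * y, (0.0, 2.0), 1.0, 40)
print(y[-1], np.exp(-4.0))
```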
Chapter 7. Numerical Integration of Partial Differential Equations
Abstract
A very large range of efficient techniques is presented for solving partial differential equations. Various physical systems illustrate the different PDEs. The mathematical characterization of PDEs is explained. The method of characteristics is presented by means of physical examples. Finite differences are treated in great detail, with many different schemes and applications in heat and mass transfer and transport in 1D and 2D. Automatic finite differences and irregular grids are discussed. Spectral methods, including Galerkin's method, collocation, and radial basis functions, are explained with applications to ODEs and PDEs. A moving grid is detailed with an application to a realistic chromatography problem. Finite volumes are detailed in 1D with applications in heat and mass transfer. Finite elements are also explained with their foundations and algorithms, with many applications, especially in heat transfer up to 3D. The boundary element method is treated in detail with an application in heat transfer up to 2D. All these methods are explained mathematically and illustrated numerically with physical examples.
Jean-Pierre Corriou
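A minimal Python sketch of an explicit finite-difference (FTCS) scheme for the 1D heat equation, one of the heat-transfer applications mentioned above; the grid sizes and initial condition are illustrative assumptions.

```python
import numpy as np

# Explicit (FTCS) finite-difference scheme for the 1D heat equation
#   du/dt = alpha * d2u/dx2  on [0, 1], with u(0, t) = u(1, t) = 0,
# stable for r = alpha*dt/dx**2 <= 0.5.
alpha = 1.0
nx, nt = 51, 2000
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha          # respects the stability limit r <= 0.5
r = alpha * dt / dx**2

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)             # illustrative initial condition
for _ in range(nt):
    # Update interior points; the boundaries stay at zero
    u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])

# Exact solution for this initial condition: exp(-pi^2*alpha*t) * sin(pi*x)
t_final = nt * dt
exact = np.exp(-np.pi**2 * alpha * t_final) * np.sin(np.pi * x)
print(np.max(np.abs(u - exact)))
```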
Chapter 8. Analytical Methods for Optimization
Abstract
After some mathematical reminders, the analytical methods of optimization are presented for the case of equality constraints with the Lagrangian and for inequality constraints with the Karush–Kuhn–Tucker multipliers. A sensitivity analysis concludes the chapter. These fundamentals are essential for the numerical solution of optimization problems. Many mathematical and numerical examples illustrate the different cases encountered.
Jean-Pierre Corriou
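A minimal worked example of the Lagrangian technique for an equality constraint; the specific problem is an illustrative assumption, not taken from the book.

```latex
\[
\min_{x,y}\; f(x,y) = x^2 + y^2
\quad \text{subject to} \quad g(x,y) = x + y - 1 = 0 .
\]
The Lagrangian is $\mathcal{L}(x,y,\lambda) = x^2 + y^2 + \lambda\,(x + y - 1)$,
and the stationarity conditions
\[
\frac{\partial \mathcal{L}}{\partial x} = 2x + \lambda = 0, \qquad
\frac{\partial \mathcal{L}}{\partial y} = 2y + \lambda = 0, \qquad
\frac{\partial \mathcal{L}}{\partial \lambda} = x + y - 1 = 0
\]
give $x = y = \tfrac12$ and $\lambda = -1$, i.e. the constrained minimum $f = \tfrac12$.
```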
Chapter 9. Numerical Methods of Optimization
Abstract
The numerical methods of optimization start with the optimization of functions of one variable: bisection, Fibonacci search, and Newton's method. Then, functions of several variables occupy the main part, divided into direct search methods and gradient methods. Among the direct search methods, many are presented: simplex, Hooke and Jeeves, Powell, Rosenbrock, Nelder–Mead, Box complex, and genetic algorithms with quasi-global optimization. Gradient methods are first explained from a general point of view for quadratic and non-quadratic functions, including the method of steepest descent, conjugate gradients, Newton–Raphson, quasi-Newton, Gauss–Newton, and Levenberg–Marquardt. Solving large systems is discussed. All these methods are illustrated by significant numerical examples.
Jean-Pierre Corriou
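A minimal Python sketch of the method of steepest descent with a backtracking line search, one of the gradient methods listed above; the test function is an illustrative assumption.

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=5000):
    """Steepest descent with a backtracking (Armijo) line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        step = 1.0
        # Halve the step until the Armijo sufficient-decrease condition holds
        while f(x - step * g) > f(x) - 1e-4 * step * (g @ g):
            step *= 0.5
        x = x - step * g
    return x

# Illustrative quadratic test function with minimum at (3, -1)
f = lambda x: (x[0] - 3.0)**2 + 10.0 * (x[1] + 1.0)**2
grad = lambda x: np.array([2.0 * (x[0] - 3.0), 20.0 * (x[1] + 1.0)])
print(steepest_descent(f, grad, [0.0, 0.0]))
```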
Chapter 10. Linear Programming
Abstract
Linear programming deals with the optimization of a linear objective function subject to linear constraints. The problem is first presented with the introduction of slack and artificial variables. The solution is obtained by means of the simplex tableau in different cases. The theoretical solution is demonstrated and illustrated by numerical examples. Duality is discussed mathematically and also explained by examples. Interior point methods are first introduced with Karmarkar's projection method and algorithm, followed by the affine transformation. All algorithms are accompanied by numerical examples.
Jean-Pierre Corriou
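A minimal Python sketch of the simplex tableau for a small maximization problem in standard form with slack variables; the data are an illustrative assumption, and the artificial-variable and interior-point treatments described in the chapter are not reproduced here.

```python
import numpy as np

def simplex_max(c, A, b):
    """Maximize c @ x subject to A @ x <= b, x >= 0 (with b >= 0),
    using the standard simplex tableau with slack variables."""
    m, n = A.shape
    # Tableau [A | I | b] with the objective row [-c | 0 | 0] at the bottom
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c
    basis = list(range(n, n + m))          # slack variables start in the basis
    while True:
        j = np.argmin(T[-1, :-1])          # entering column (most negative cost)
        if T[-1, j] >= -1e-12:
            break                          # optimal
        ratios = np.where(T[:m, j] > 1e-12, T[:m, -1] / T[:m, j], np.inf)
        i = np.argmin(ratios)              # leaving row (minimum ratio test)
        if ratios[i] == np.inf:
            raise ValueError("problem is unbounded")
        T[i, :] /= T[i, j]                 # pivot
        for k in range(m + 1):
            if k != i:
                T[k, :] -= T[k, j] * T[i, :]
        basis[i] = j
    x = np.zeros(n + m)
    x[basis] = T[:m, -1]
    return x[:n], T[-1, -1]

# Illustrative LP: maximize 3x1 + 5x2 s.t. x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18
x, z = simplex_max(np.array([3.0, 5.0]),
                   np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]),
                   np.array([4.0, 12.0, 18.0]))
print(x, z)   # expected optimum x = (2, 6), z = 36
```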
Chapter 11. Quadratic Programming and Nonlinear Optimization
Abstract
Quadratic programming strictly deals with the optimization of a quadratic function subject to linear constraints, but it is here extended to non-quadratic functions. The solution of the QP problem is given by the simplex method but also by the barrier method. This is followed by nonlinear optimization by successive quadratic programming (SQP), which is explained through different algorithms. Examples systematically illustrate the different techniques.
Jean-Pierre Corriou
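A minimal Python sketch of an equality-constrained quadratic program solved through its KKT linear system; the data are illustrative assumptions, and the simplex and barrier treatments of inequality constraints described in the chapter are not reproduced here.

```python
import numpy as np

def eq_qp(Q, c, A, b):
    """Solve  min 0.5 x^T Q x + c^T x  subject to  A x = b
    by assembling and solving the KKT linear system
        [Q  A^T] [x     ]   [-c]
        [A  0  ] [lambda] = [ b]."""
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([-c, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]        # primal solution, Lagrange multipliers

# Illustrative QP: min x1^2 + x2^2 subject to x1 + x2 = 1  ->  x = (0.5, 0.5)
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, lam = eq_qp(Q, c, A, b)
print(x, lam)
```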
Chapter 12. Dynamic Optimization
Abstract
Dynamic optimization deals with problems whose solution depends on time or space. It is approached from different angles. First, from a purely mathematical point of view, the problem is solved using the calculus of variations. The first-order Euler conditions are demonstrated, and the second-order Legendre–Clebsch conditions are mentioned. The same problem is discussed in the Hamilton–Jacobi framework. Then, dynamic optimization in continuous time is treated in the framework of optimal control. Euler's method, Hamilton–Jacobi, and Pontryagin's maximum principle are presented in turn. Several detailed examples accompany the different techniques. Numerical issues and their possible solutions are explained. The continuous-time part is followed by the discrete-time part, i.e., dynamic programming. Bellman's theory is explained both by backward and forward induction with clear numerical examples.
Jean-Pierre Corriou
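A minimal Python sketch of Bellman's backward induction on a small deterministic shortest-path problem; the stage costs are illustrative assumptions.

```python
import math

# Backward induction (Bellman) on a small staged shortest-path problem.
# stages[k][(i, j)] is the cost of moving from state i at stage k to
# state j at stage k + 1.  The data are illustrative.
stages = [
    {('A', 'B1'): 2, ('A', 'B2'): 4},                 # stage 0 -> stage 1
    {('B1', 'C1'): 7, ('B1', 'C2'): 3,
     ('B2', 'C1'): 2, ('B2', 'C2'): 5},               # stage 1 -> stage 2
    {('C1', 'D'): 1, ('C2', 'D'): 6},                 # stage 2 -> final stage
]

# Value function at the final stage: cost-to-go of the terminal state is zero
V = {'D': 0.0}
policy = []
for arcs in reversed(stages):
    V_new, decision = {}, {}
    for (i, j), cost in arcs.items():
        candidate = cost + V[j]                       # Bellman recursion
        if candidate < V_new.get(i, math.inf):
            V_new[i], decision[i] = candidate, j
    V, policy = V_new, [decision] + policy

# Recover the optimal path by following the stored decisions forward
state, path = 'A', ['A']
for decision in policy:
    state = decision[state]
    path.append(state)
print(V['A'], path)   # expected: minimum total cost 7 along A -> B2 -> C1 -> D
```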
Backmatter
Metadata
Title
Numerical Methods and Optimization
Author
Jean-Pierre Corriou
Copyright year
2021
Electronic ISBN
978-3-030-89366-8
Print ISBN
978-3-030-89365-1
DOI
https://doi.org/10.1007/978-3-030-89366-8