
About this Book

A text surveying perturbation techniques and sensitivity analysis of linear systems is an ambitious undertaking, considering the lack of basic comprehensive texts on the subject. A wide-ranging, global treatment of the topic is still missing; the numerous existing monographs deal with specific topics and are generally of use only to a narrow readership. Most works, in fact, approach the subject from the numerical-analysis point of view. Researchers in that field have indeed been the most concerned with the topic, although engineers and scholars in all fields may find it equally interesting. One can state, without great exaggeration, that a great deal of engineering work is devoted to testing systems' sensitivity to changes in design parameters. As a rule, high-sensitivity elements are those which should be designed with the utmost care. On the other hand, since the mathematical model serving the design process is usually idealized and often inaccurately formulated, unforeseen alterations may cause the system to behave somewhat differently than intended. Sensitivity analysis, which starts from the assumption of such a discrepancy between the ideal and the actual system, can help the engineer devise ways to minimize it.

Table of Contents

Frontmatter

Chapter 1. Perturbation of Linear Equations

Abstract
This chapter will discuss the behaviour of the system of linear simultaneous equations
$$ Ax = b $$
when the matrix A and the vector b are subjected to small order perturbations ∆A and ∆b respectively. The problem then becomes
$$ \left( {A + \Delta A} \right)\left( {x + \Delta x} \right) = b + \Delta b $$
and we are mainly concerned with studying the deviation ∆x of the solution with the perturbation. Such an exercise is called sensitivity analysis, for the extent of the deviation ∆x relative to ∆A and ∆b defines the sensitivity of the system. A highly sensitive system is roughly one where a relatively large deviation ∆x is incurred by small perturbations ∆A and ∆b. As we shall see, highly sensitive systems are generally to be avoided in practice. They are referred to as ill-conditioned, for a highly sensitive system would yield large variations in the results for only small uncertainties in the data. To clarify this fact, let us study the behaviour of the system of equations
$$ x + y = 2 $$
$$ 0.49x + 0.51y = 1 $$
— representing a pair of straight lines intersecting at x = 1, y = 1 — when a small term ε is added to the equations. Surprisingly enough, the set of equations obtained, namely
$$ x + y = 2 + \varepsilon $$
$$ 0.49x + 0.51y = 1 $$
represents a pair of straight lines meeting at the point (x, y) given by x = 1 + 25.5ε, y = 1 − 24.5ε, rather distant from x = 1, y = 1. Here a small change of order ε in the equations has produced a change of order 25ε in the solution. This system clearly has a high sensitivity to perturbations in its matrix A and vector b.
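The amplification described above can be checked numerically. The sketch below (assuming NumPy is available; the value of ε is an arbitrary choice for illustration) solves the original and perturbed systems and also reports the condition number of A, which quantifies the worst-case amplification of relative errors:

```python
import numpy as np

# The 2x2 system from the text: x + y = 2, 0.49x + 0.51y = 1.
A = np.array([[1.0, 1.0],
              [0.49, 0.51]])
b = np.array([2.0, 1.0])

x_exact = np.linalg.solve(A, b)          # intersection at (1, 1)

# Perturb the first right-hand side by a small eps.
eps = 1e-3
b_pert = np.array([2.0 + eps, 1.0])
x_pert = np.linalg.solve(A, b_pert)      # (1 + 25.5*eps, 1 - 24.5*eps)

print(x_exact, x_pert)
# The condition number of A explains the amplification factor.
print(np.linalg.cond(A))
```

A change of 10⁻³ in one datum moves the solution by roughly 2.5 × 10⁻², consistent with the factor of about 25 derived above.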
Assem Deif

Chapter 2. Methods of Interval Analysis

Abstract
Three types of errors are encountered in numerical analysis, namely:
1. Round-off errors, arising when numbers are rounded to fit a certain precision arithmetic; e.g. the case where 1/6 = 0.1666... is approximated to 0.167 on a three-digit machine.
2. Truncation errors, resulting when convergent series are truncated down to a finite number of terms; e.g. the case where π = 3.14159265... is approximated by π = 3.14.
3. Data errors, associated with the specific physical model under study. They represent a parameter's uncertainties when it is determined through experimental measurements.
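Interval analysis, the subject of this chapter, handles such errors by enclosing each uncertain quantity between two bounds and propagating the bounds through arithmetic. A minimal sketch (the class and the particular bounds below are illustrative, not the book's notation) shows how outward rounding keeps the true value of 1/6 enclosed:

```python
from fractions import Fraction

# Minimal interval type: a pair of bounds [lo, hi] enclosing a true value.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sum of intervals: add lower and upper bounds separately.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product: take the extremes over all four endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def contains(self, x):
        return self.lo <= x <= self.hi

# 1/6 rounded outward to three digits is enclosed by [0.166, 0.167].
sixth = Interval(Fraction(166, 1000), Fraction(167, 1000))
print(sixth.contains(Fraction(1, 6)))            # encloses 1/6
print((sixth + sixth).contains(Fraction(1, 3)))  # sum still encloses 1/3
```

The enclosure property survives each operation: the computed interval always brackets the exact result, at the price of bounds that may widen as the computation proceeds.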
Assem Deif

Chapter 3. Iterative Systems

Abstract
Whereas direct methods for solving linear equations yield results after a fixed amount of computation, iterative methods improve an approximate solution repeatedly until it meets a prescribed level of accuracy. In theory, direct methods give exact solutions in the absence of round-off errors; in practice, owing to conditioning problems, this is never the case. For small matrices, direct methods are recommended. For large ones, direct methods involve a very large number of operations without real need, since such matrices are usually sparse.
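As an illustration of an iterative method of this kind, here is a minimal sketch of the classical Jacobi iteration (the function and the small test system are illustrative, not taken from the text; NumPy is assumed):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.diag(A)                  # diagonal of A
    R = A - np.diag(d)              # off-diagonal part
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / d
        # Stop once successive iterates agree to the prescribed accuracy.
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

# A diagonally dominant system, for which convergence is guaranteed.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
print(jacobi(A, b))
```

Each sweep costs only one multiplication by the off-diagonal part of A, which is cheap when A is sparse; this is precisely the advantage over direct methods noted above.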
Assem Deif

Chapter 4. The Least-Squares Problem

Abstract
The set of linear simultaneous equations Ax = b, A ∈ ℝ^{m×n}, has either a unique solution for x, more than one solution, or no solution at all. For x to be unique, A is necessarily nonsingular and x is given by x = A^{-1}b (m = n). The situation in which there is more than one solution occurs when b can be expressed linearly in terms of a few column vectors of A, with rank r(A) < n. The equations are then said to be consistent yet indeterminate. The solution comes as x = A^i b, where A^i is some generalized inverse of A satisfying AA^iA = A. A generalized inverse A^i satisfying the latter condition can easily be suggested (Bellman 1970, p. 105) as
$$ A^i = P\left[ {\begin{array}{*{20}c}{I_r }\\Y\\\end{array}\;\begin{array}{*{20}c}X\\Z\\\end{array}} \right]R $$
where R and P are respectively elementary row and column operations which bring A to the canonical form, namely
$$ RAP = \left[ {\begin{array}{*{20}c}{I_r }\\0\\\end{array}\;\begin{array}{*{20}c}0\\0\\\end{array}} \right] $$
while X, Y and Z are arbitrary. x is therefore given by
$$ x = {A^t}b + (I - {A^t}A)c $$
where A^t is the determinate part of A^i and c is an arbitrary vector accounting for X, Y and Z. To see how this equivalence follows, note that A(I − A^tA) = A(I − A^iA) = 0 and that AA^tb = b, so that premultiplying x by A yields b on the right-hand side, as our starting assumption requires. The columns of (I − A^tA) lie in the null space of A, and a linear combination of them represents the homogeneous solution.
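The general solution x = A^t b + (I − A^t A)c can be verified numerically. The sketch below uses the Moore–Penrose inverse (one particular generalized inverse, computed with NumPy's `pinv`) on a rank-deficient but consistent system; the matrix and vectors are illustrative choices, not from the text:

```python
import numpy as np

# Consistent indeterminate system: rank(A) = 1 < n = 2, b in the range of A.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([3.0, 6.0])

A_pinv = np.linalg.pinv(A)        # Moore-Penrose inverse, one choice of A^i
P = np.eye(2) - A_pinv @ A        # projector onto the null space of A

c = np.array([5.0, -7.0])         # arbitrary vector
x = A_pinv @ b + P @ c            # general solution x = A^i b + (I - A^i A) c

print(A @ x)                      # reproduces b regardless of c
```

Because AA^iA = A, the term A(I − A^iA)c vanishes identically, so every choice of c yields another valid solution, exactly as the argument above states.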
Assem Deif

Chapter 5. Sensitivity in Linear Programming

Abstract
Whilst the theory of linear equations is concerned with solving the equations Ax = b and the methods involved therein, linear programming studies the set of linear inequalities Ax ≦ b. These inequalities define the set X = {x : Ax ≦ b, x ≧ 0} in ℝ^n, n = dim x; X is a convex set with a polyhedral shape. Linear programming's major concern is then to select, among the vertices of X, the one that either maximizes or minimizes the linear function
$$ \sum\limits_{i = 1}^n {{c_i}{x_i}} $$
Mathematically, the problem of linear programming is formulated as follows
$$ \mathop {\max }\limits_{{x_1},...,{x_n}} {c_1}{x_1} + {c_2}{x_2} + ... + {c_n}{x_n} $$
subject to the restrictions
$$ {a_{11}}{x_1} + ... + {a_{1n}}{x_n} \leqq {b_1} $$
$$ \vdots $$
$$ {a_{m1}}{x_1} + ... + {a_{mn}}{x_n} \leqq {b_m} $$
with
$$ {x_1},{x_2},...,{x_n} \geqq 0 $$
The coefficients c_1, ..., c_n are called the cost coefficients. The elements a_{ij} are sometimes called the input-output coefficients; they form the so-called technological matrix. The values b_i define bounds on the available resources that set a limit on production. This nomenclature originated from the practical applications of linear programming. The reader is referred to Gass (1969) for an account of these applications.
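A naive way to see the vertex-selection picture in code, workable only for tiny problems (the simplex method walks these vertices far more economically), is to enumerate every intersection of n constraint hyperplanes and keep the feasible one with the largest objective. The function and the problem data below are illustrative, not from the text:

```python
import itertools
import numpy as np

def lp_max_by_vertices(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0 by enumerating vertices."""
    n = len(c)
    # Treat x_i >= 0 as extra constraints -x_i <= 0, then intersect
    # every choice of n constraint hyperplanes.
    A_all = np.vstack([A, -np.eye(n)])
    b_all = np.concatenate([b, np.zeros(n)])
    best_x, best_val = None, -np.inf
    for rows in itertools.combinations(range(len(b_all)), n):
        M, rhs = A_all[list(rows)], b_all[list(rows)]
        if abs(np.linalg.det(M)) < 1e-12:
            continue                 # hyperplanes do not meet in one point
        x = np.linalg.solve(M, rhs)
        if np.all(A_all @ x <= b_all + 1e-9):    # feasibility check
            val = c @ x
            if val > best_val:
                best_x, best_val = x, val
    return best_x, best_val

# max 3x1 + 5x2  subject to  x1 <= 4,  2x2 <= 12,  3x1 + 2x2 <= 18
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])
x_opt, val = lp_max_by_vertices(c, A, b)
print(x_opt, val)   # optimum at (2, 6) with value 36
```

The optimum is always attained at a vertex of the polyhedron X, which is why the search can be restricted to the finitely many vertices in the first place.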
Assem Deif

Backmatter
