
1997 | Book

Entropy Optimization and Mathematical Programming

Authors: S.-C. Fang, J. R. Rajasekera, H.-S. J. Tsao

Publisher: Springer US

Book series: International Series in Operations Research & Management Science


About this book

Entropy optimization is a useful combination of classical engineering theory (entropy) with mathematical optimization. The resulting entropy optimization models have proved useful in applications such as image reconstruction, pattern recognition, statistical inference, queueing theory, spectral analysis, statistical mechanics, transportation planning, urban and regional planning, input-output analysis, portfolio investment, information analysis, and linear and nonlinear programming.
While entropy optimization has been used in many fields, a good number of its solution methods have been constructed loosely, without sufficient mathematical treatment. A systematic presentation with proper mathematical treatment of this material is needed by practitioners and researchers alike in all application areas. The purpose of this book is to meet this need. Entropy Optimization and Mathematical Programming offers perspectives that meet the needs of diverse user communities, so that users can apply entropy optimization techniques with confidence and ease. With this consideration, the authors focus on entropy optimization problems in finite-dimensional Euclidean space, so that only basic familiarity with optimization is required of the reader.

Table of contents

Frontmatter
Chapter 1. Introduction to Entropy and Entropy Optimization Principles
Abstract
This chapter provides a historical perspective of the concept of entropy, Shannon’s reasoning, and the axioms that justify the principles of entropy optimization, namely, the maximum entropy and minimum cross-entropy principles. The mathematical forms of various entropy optimization problems are also discussed along with references to the existing literature. The chapter consists of two sections. Section 1.1 introduces the concept of entropy, and Section 1.2 classifies different entropy optimization problems to be studied in this book.
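The maximum entropy principle can be made concrete with the classic loaded-die problem: among all distributions on {1, …, 6} with a prescribed mean, the one of maximal entropy has exponential form \(p_i \propto e^{\lambda i}\), and the multiplier \(\lambda\) can be found by bisection. A minimal stdlib Python sketch (illustrative; the function name and setup are not from the book):

```python
import math

def maxent_die(values, target_mean, lo=-10.0, hi=10.0, iters=200):
    """Maximum-entropy distribution on `values` with a prescribed mean.

    By the maximum entropy principle, the solution has the exponential
    form p_i proportional to exp(lam * v_i); since the tilted mean is
    increasing in lam, we solve for lam by bisection.
    """
    def mean_for(lam):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z

    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

# Mean 4.5 instead of the fair die's 3.5: the solution tilts toward high faces.
p = maxent_die([1, 2, 3, 4, 5, 6], 4.5)
```

Because the constraint is a single mean, the dual problem is one-dimensional, which is why simple bisection suffices here.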
S.-C. Fang, J. R. Rajasekera, H.-S. J. Tsao
Chapter 2. Entropy Optimization Models
Abstract
Entropy optimization models have been successfully applied to practical problems in many scientific and engineering disciplines. As noted in Chapter 1, those disciplines include statistical mechanics, thermodynamics, statistical parameter estimation and inference, economics, business and finance, nonlinear spectral analysis, pattern recognition, transportation, urban and regional planning, queueing theory, and linear and nonlinear programming. Included in this book are example applications in the areas of (1) queueing theory, (2) transportation planning, (3) input-output analysis, (4) regional planning, (5) investment portfolio optimization, and (6) image reconstruction. These are discussed in Sections 2.1 through 2.6.
S.-C. Fang, J. R. Rajasekera, H.-S. J. Tsao
Chapter 3. Entropy Optimization Methods: Linear Case
Abstract
Let \(x \equiv (x_1, \ldots, x_n)^T \geq 0\) be a nonnegative n-dimensional column vector and \(p \equiv (p_1, \ldots, p_n)^T > 0\) be a positive n-dimensional column vector. With the convention of 0 ln 0 = 0, we define the quantity \(\sum_{j=1}^n x_j \ln(x_j/p_j)\) to be the cross-entropy of x with respect to p, in a general sense. Note that when x and p are both probability distributions, i.e., \(\sum_{j=1}^n x_j = \sum_{j=1}^n p_j = 1\), this quantity becomes the commonly defined cross-entropy between the two probability distributions (see Chapter 1).
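The generalized cross-entropy defined above, together with the 0 ln 0 = 0 convention, translates directly into a small helper. A minimal stdlib sketch (the function name is illustrative, not from the book):

```python
import math

def cross_entropy(x, p):
    """Generalized cross-entropy sum_j x_j ln(x_j / p_j), with 0 ln 0 = 0.

    As in the definition, x must be nonnegative and p strictly positive.
    Terms with x_j == 0 are skipped, implementing the 0 ln 0 = 0 convention.
    """
    if any(xj < 0 for xj in x) or any(pj <= 0 for pj in p):
        raise ValueError("require x >= 0 and p > 0")
    return sum(xj * math.log(xj / pj) for xj, pj in zip(x, p) if xj > 0)
```

When x and p are both probability distributions, this value is nonnegative and equals zero exactly when x = p, which is what makes it usable as a discrepancy measure.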
S.-C. Fang, J. R. Rajasekera, H.-S. J. Tsao
Chapter 4. Entropy Optimization Methods: General Convex Case
Abstract
Let \(q \equiv (q_1, \ldots, q_n)^T \geq 0\) and \(p \equiv (p_1, \ldots, p_n)^T > 0\) be two probability distributions. With the convention of 0 ln 0 = 0, in Chapter 1 we defined the quantity \(\sum_{j=1}^n q_j \ln(q_j/p_j)\) to be the cross-entropy of q with respect to p. In this chapter, we study three classes of minimum cross-entropy problems, namely,
1) minimizing cross-entropy subject to quadratic constraints,
2) minimizing cross-entropy subject to entropic constraints, and
3) minimizing cross-entropy subject to convex constraints.
S.-C. Fang, J. R. Rajasekera, H.-S. J. Tsao
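In the simplest linearly constrained special case of the minimum cross-entropy problem — a single mean constraint — duality yields a closed exponential-tilting form \(q_j \propto p_j e^{\lambda a_j}\), so a one-dimensional bisection on \(\lambda\) solves it. A minimal stdlib sketch (illustrative; this is not one of the book's algorithms for the convex case):

```python
import math

def min_cross_entropy_mean(p, a, target, lo=-10.0, hi=10.0, iters=200):
    """Minimize sum_j q_j ln(q_j/p_j) over distributions q with sum_j a_j q_j = target.

    Duality gives the exponential-tilting form q_j proportional to
    p_j * exp(lam * a_j); lam is found by bisection, since the tilted
    mean is increasing in lam.
    """
    def tilted(lam):
        w = [pj * math.exp(lam * aj) for pj, aj in zip(p, a)]
        z = sum(w)
        return [wj / z for wj in w]

    for _ in range(iters):
        mid = (lo + hi) / 2.0
        q = tilted(mid)
        if sum(aj * qj for aj, qj in zip(a, q)) < target:
            lo = mid
        else:
            hi = mid
    return tilted((lo + hi) / 2.0)

# Tilt a uniform prior on {1, 2, 3, 4} to a target mean of 3.2.
q = min_cross_entropy_mean([0.25, 0.25, 0.25, 0.25], [1.0, 2.0, 3.0, 4.0], 3.2)
```

With a uniform prior p, this reduces to the maximum entropy problem, illustrating how the minimum cross-entropy principle generalizes it.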
Chapter 5. Entropic Perturbation Approach to Mathematical Programming
Abstract
The barrier and penalty function methods for solving mathematical programming problems have been widely used for both theoretical and computational purposes. In a penalty approach, any point outside of the feasible region is assigned a penalty, while, in a barrier approach, feasible solutions near the boundary of the feasible region are penalized. Both approaches are designed to prevent the search for an optimal solution from wandering away from the feasible region, and both can be considered objective-perturbation approaches. This chapter studies the objective-perturbation approach using the entropic function \(\sum_j x_j \ln x_j\) for solving four classes of problems, namely, linear programming problems in Karmarkar's form, linear programming problems in standard form, convex quadratic programming problems, and linear and convex quadratic semi-infinite programming problems.
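The effect of an entropic objective perturbation can be seen on the simplest feasible region, the probability simplex: perturbing \(\min c^T x\) by \(\mu \sum_j x_j \ln x_j\) gives a closed-form strictly interior solution \(x_j \propto e^{-c_j/\mu}\) that approaches an optimal vertex as \(\mu \to 0\). A minimal stdlib sketch (illustrative; not the book's derivation):

```python
import math

def entropic_softmin(c, mu):
    """Solve min c.x + mu * sum_j x_j ln x_j  s.t.  sum_j x_j = 1, x >= 0.

    Setting the Lagrangian gradient to zero gives the closed form
    x_j proportional to exp(-c_j / mu); the entropic term keeps every
    x_j strictly positive, i.e., the solution is interior to the simplex.
    """
    w = [math.exp(-cj / mu) for cj in c]
    z = sum(w)
    return [wj / z for wj in w]

# As mu shrinks, mass concentrates on the index with the smallest cost.
c = [3.0, 1.0, 2.0]
solutions = {mu: entropic_softmin(c, mu) for mu in (1.0, 0.1, 0.01)}
```

The same mechanism underlies the entropic perturbation of linear programs: the perturbed problem has a unique interior solution, and driving the perturbation parameter to zero recovers an optimal solution of the original problem.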
S.-C. Fang, J. R. Rajasekera, H.-S. J. Tsao
Chapter 6. Lp-Norm Perturbation Approach: A Generalization of Entropic Perturbation
Abstract
Solving a linear or nonlinear program by perturbing its primal objective function with a barrier or penalty function has attracted much attention recently in developing interior-point methods [3, 17, 4, 13, 18]. However, the idea of perturbing the feasible region has not been fully explored. This chapter focuses on this idea and discusses a particular approach involving the \(l_p\)-norm of a vector measure of constraint violation. Three topics will be discussed in this chapter:
(i) perturbing the dual feasible region of a standard-form linear program,
(ii) perturbing the primal feasible region of a linear program with inequality constraints, and
(iii) perturbing the dual feasible region of a convex quadratic program.
S.-C. Fang, J. R. Rajasekera, H.-S. J. Tsao
Chapter 7. Extensions and Related Results
Abstract
This chapter discusses extensions of the methods developed in previous chapters and covers some closely related subjects, including
(i) entropy optimization problems with a finite number of constraints but a countably infinite number of variables,
(ii) a relationship between entropy optimization and Bayesian statistical estimation,
(iii) the entropic regularization method for solving min-max problems, and
(iv) the entropic regularization method for solving semi-infinite min-max problems.
S.-C. Fang, J. R. Rajasekera, H.-S. J. Tsao
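The entropic regularization mentioned in items (iii) and (iv) replaces the nonsmooth objective \(\max_i f_i(x)\) with the smooth surrogate \(\frac{1}{p} \ln \sum_i e^{p f_i(x)}\), which overestimates the true max by at most \(\ln(m)/p\) for m functions. A minimal stdlib sketch of the surrogate at a fixed point (illustrative):

```python
import math

def entropic_max(f_vals, p):
    """Smooth surrogate for max_i f_i: (1/p) * ln(sum_i exp(p * f_i)).

    Shifting by the true max before exponentiating avoids overflow.
    The surrogate lies in [max, max + ln(m)/p] with m = len(f_vals),
    so increasing p tightens the approximation.
    """
    m = max(f_vals)
    return m + math.log(sum(math.exp(p * (v - m)) for v in f_vals)) / p

# The surrogate approaches the true max 2.5 as p grows.
vals = [1.0, 2.5, -0.3]
approximations = {p: entropic_max(vals, p) for p in (1.0, 10.0, 100.0)}
```

Because the surrogate is smooth in x, a min-max problem can be attacked with standard smooth optimization methods, with p controlling the trade-off between smoothness and accuracy.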
Backmatter
Metadata
Title
Entropy Optimization and Mathematical Programming
Authors
S.-C. Fang
J. R. Rajasekera
H.-S. J. Tsao
Copyright year
1997
Publisher
Springer US
Electronic ISBN
978-1-4615-6131-6
Print ISBN
978-1-4613-7810-5
DOI
https://doi.org/10.1007/978-1-4615-6131-6