
2014 | Book

Traces and Emergence of Nonlinear Programming

Editors: Giorgio Giorgi, Tinne Hoff Kjeldsen

Publisher: Springer Basel


About this book

The book contains reproductions of the most important papers that gave birth to the first developments in nonlinear programming. Of particular interest is W. Karush's often-quoted master's thesis, which is published here for the first time. The anthology includes an extensive preliminary chapter, where the editors trace out the history of mathematical programming, with special reference to linear and nonlinear programming.

Table of Contents

Frontmatter
A Historical View of Nonlinear Programming: Traces and Emergence
Abstract
The historical view we propose in this introductory chapter will point out some of the technical difficulties of mathematical problems related to nonlinear programming and some features of the economic and social (also military) context that favored its rootedness in the years of the Second World War and the years immediately following the war. We recall some of the main definitions and basic results of mathematical programming and briefly address the "prehistory" of nonlinear programming. The main part of the chapter deals with the first ideas and developments of linear programming, first in the USSR and then in the USA, and with the fundamental research of W. Karush, Fritz John, H.W. Kuhn and A.W. Tucker, which is analyzed and discussed with respect to its mathematical and historical features.
Giorgio Giorgi, Tinne Hoff Kjeldsen
A Gradient Method For Approximating Saddle Points and Constrained Maxima
Abstract
In the following, X and Y will be vectors with components Xi, Yj. By X ≥ 0 will be meant Xi ≥ 0 for all i. Let g(X), fj(X) (j = 1, ..., m) be functions with suitable differentiability properties, where fj(X) ≥ 0 for all X, and define \( \mathrm{F}(X, Y) = g(X) + \sum_{j=1}^{m} Y_{j}\left\{1 - \left[f_{j}(X)\right]^{1+\eta}\right\} \).
Kenneth J. Arrow, Leonid Hurwicz
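The saddle-point gradient procedure the abstract describes can be illustrated with a small numerical sketch: simultaneous gradient ascent in the primal variable and projected gradient descent in the dual variable of a Lagrangian-type function. The toy problem, step size, and iteration count below are assumptions for illustration, not the authors' original example.

```python
# Sketch of an Arrow-Hurwicz-style gradient method for a saddle point.
# Toy problem (an assumption): maximize g(x) = -(x - 2)**2 subject to
# c(x) = 1 - x >= 0, with F(x, y) = g(x) + y * c(x). The saddle point
# is (x, y) = (1, 2): the constraint is active and -2(1 - 2) - y = 0.

def arrow_hurwicz(x=0.0, y=0.0, step=0.1, iters=2000):
    for _ in range(iters):
        # dF/dx = -2(x - 2) - y : gradient ascent in the primal x
        x = x + step * (-2.0 * (x - 2.0) - y)
        # dF/dy = 1 - x : gradient descent in the dual y, projected onto y >= 0
        y = max(0.0, y - step * (1.0 - x))
    return x, y

x_star, y_star = arrow_hurwicz()
print(x_star, y_star)
```

The projection `max(0, ...)` keeps the multiplier non-negative, mirroring the role of the dual variable in the saddle-point formulation.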
Reduction of Constrained Maxima to Saddle-Point Problems
Abstract
The usual applications of the method of Lagrangian multipliers, used in locating constrained extrema (say maxima), involve the setting up of the Lagrangian expression,
$$ \phi(x,y)=f(x)+y^{\prime}g(x), $$
where f(x) is being (say) maximized with respect to the (vector) variable x = {x1, ..., xN}, subject to the constraint g(x) = 0, where g(x) maps the points of the N-dimensional x-space into an M-dimensional space, and y = {y1, ..., yM} is the Lagrange multiplier (vector).
Kenneth J. Arrow, Leonid Hurwicz
Constraint Qualifications in Maximization Problems
Abstract
Many problems arising in logistics and in the application of mathematics to industrial planning are in the form of constrained maximizations with nonlinear maximands or constraint functions or both. Thus a depot facing random demands for several items may wish to place orders for each in such a way as to maximize the expected number of demands which are fulfilled; the total of orders placed is limited by a budget constraint. In this case, the maximand is certainly nonlinear. The constraint would also be nonlinear if, for example, the marginal cost of storage of the goods were increasing. Practical methods for solving such problems in nonlinear programming almost invariably depend on some use of Lagrange multipliers, either by direct solution of the resulting system of equations or by a gradient method of successive approximations (see [5], Part II). This article discusses a part of the sufficient conditions for the validity of the multiplier method.
Kenneth J. Arrow, Leonid Hurwicz, Hirofumi Uzawa
Normality and Abnormality in the Calculus of Variations
Abstract
Within the past few years a number of papers concerning the problem of Bolza in the calculus of variations have been published which make it possible to carry through the theory of this problem with much simplified assumptions concerning what is called the normality of the minimizing arc. I refer especially to papers by Graves [8],† Hestenes [11, 14, 16], Reid [15], and Morse [13]. These papers and others are also important because they bring the theory of problems of the calculus of variations with variable end points to a stage comparable with that already attained for the more special case in which the end points are fixed.
Gilbert A. Bliss
A Theorem of Fritz John in Mathematical Programming
Abstract
A typical formulation of the mathematical programming problem is
$$ \text{Maximize } f(x) \text{ subject to } x \in E_{n},\; g(x) \geqq 0. $$
(2.1)
Richard W. Cottle
On Conjugate Convex Functions
Abstract
Since the classical work of Minkowski and Jensen it is well known that many of the inequalities used in analysis may be considered as consequences of the convexity of certain functions. In several of these inequalities pairs of "conjugate" functions occur, for instance pairs of powers with exponents a and a′ related by 1/a + 1/a′ = 1. A more general example is the pair of positively homogeneous convex functions defined by Minkowski and known as the distance (or gauge) function and the support function of a convex body. The purpose of the present paper is to explain the general (by the way rather elementary) idea underlying this correspondence.
Werner Fenchel
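The conjugacy correspondence Fenchel studies is the transform f*(y) = sup_x [xy − f(x)]. A minimal numerical sketch, with the grid, range, and sample function chosen as assumptions for illustration: for f(x) = x²/2 the conjugate is again y²/2, i.e. the function is self-conjugate.

```python
# Grid approximation of the Fenchel (Legendre) conjugate
#   f*(y) = sup_x [ x*y - f(x) ].
# The interval [-10, 10] and grid size are assumptions for illustration.

def conjugate(f, y, lo=-10.0, hi=10.0, n=100001):
    h = (hi - lo) / (n - 1)
    return max(y * (lo + i * h) - f(lo + i * h) for i in range(n))

f = lambda x: 0.5 * x * x
# For this f the supremum at y is attained at x = y, giving f*(y) = y**2 / 2
print(conjugate(f, 1.0))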
Programming in Linear Spaces
Abstract
The present study has its origin in problems of optimal resource allocation, especially those related to the possibilities of a price mechanism. While for some purposes Pareto-optimality might be the more relevant concept, we have confined ourselves here to the case where by "optimal" is meant "efficient" resource allocation.
Leonid Hurwicz
Extremum Problems with Inequalities as Subsidiary Conditions
Abstract
This paper deals with an extension of Lagrange's multiplier rule to the case where the subsidiary conditions are inequalities instead of equations. Only extrema of differentiable functions of a finite number of variables will be considered. There may, however, be an infinite number of inequalities prescribed. Lagrange's rule for the situation considered here differs from the ordinary one in that the multipliers may always be assumed to be positive. This makes it possible to obtain sufficient conditions for the occurrence of a minimum in terms of the first derivatives only.
Fritz John
Minima of Functions of Several Variables with Inequalities as Side Conditions
Abstract
The problem of determining necessary conditions and sufficient conditions for a relative minimum of a function \( f(x_1, x_2, \ldots, x_n) \) in the class of points \( x = (x_1, x_2, \ldots, x_n) \) satisfying the equations \( g_{\alpha}(x) = 0 \; (\alpha = 1, 2, \ldots, m), \) where the functions f and gα have continuous derivatives of at least the second order, has been satisfactorily treated [1]*. This paper proposes to take up the corresponding problem in the class of points x satisfying the inequalities \( g_{\alpha}(x) \geqq 0 \quad (\alpha = 1, 2, \ldots, m), \) where m may be less than, equal to, or greater than n.
William Karush
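The first-order conditions Karush derives for inequality constraints (now known as the Karush-Kuhn-Tucker conditions) can be checked numerically at a candidate point. The tiny problem below is an assumption for illustration, not one of Karush's examples.

```python
# Checking Karush-style first-order conditions at a candidate point.
# Toy problem (an assumption for illustration):
#   minimize f(x) = (x1 - 1)**2 + (x2 - 1)**2
#   subject to g(x) = 1 - x1 - x2 >= 0.
# At a minimizer x*, stationarity requires grad f(x*) = u * grad g(x*)
# for some multiplier u >= 0, with u * g(x*) = 0 (complementary slackness).

x = (0.5, 0.5)                               # candidate on the boundary
grad_f = (2 * (x[0] - 1), 2 * (x[1] - 1))    # equals (-1, -1)
grad_g = (-1.0, -1.0)
g_val = 1 - x[0] - x[1]                      # equals 0: constraint active

u = 1.0                                      # multiplier solving grad f = u * grad g
stationary = all(abs(gf - u * gg) < 1e-9 for gf, gg in zip(grad_f, grad_g))
complementary = abs(u * g_val) < 1e-9

print(stationary, complementary, u >= 0)
```

All three conditions hold at (0.5, 0.5), the point where the circle of level sets of f first touches the feasible half-plane.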
Nonlinear Programming
Abstract
Linear programming deals with problems such as (see [ 4], [ 5]): to maximize a linear function
$$ g(x) \equiv \sum c_{i}x_{i} \quad \text{of } n \text{ real variables } x_{1}, \ldots, x_{n} $$
(forming a vector x) constrained by m + n linear inequalities.
Harold W. Kuhn, Albert W. Tucker
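A linear program of the kind described in the abstract can be solved, for a tiny instance, by enumerating the vertices of the feasible polygon, since an LP optimum, when one exists, is attained at a vertex. The instance below and the enumeration approach are assumptions for illustration, not material from the paper.

```python
# Solve a tiny linear program by vertex enumeration (illustrative only):
#   maximize 3*x1 + 2*x2
#   subject to x1 + x2 <= 4, x1 <= 3, x1 >= 0, x2 >= 0.
from itertools import combinations

# Constraints in the form a1*x1 + a2*x2 <= b
cons = [(1, 1, 4), (1, 0, 3), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Intersection of the two boundary lines, or None if parallel."""
    (a, b, e), (c, d, f) = c1, c2
    det = a * d - b * c
    if abs(det) < 1e-12:
        return None
    return ((e * d - b * f) / det, (a * f - e * c) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in cons)

vertices = [p for c1, c2 in combinations(cons, 2)
            if (p := intersect(c1, c2)) and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(best)   # the optimum vertex is (3.0, 1.0), with value 11
```

Vertex enumeration is exponential in general; the simplex method the early LP literature builds on walks between these same vertices far more economically.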
Second Order Conditions for Constrained Minima
Abstract
This paper establishes two sets of "second order" conditions (one necessary, the other sufficient) in order that a vector x* be a local minimum for the constrained optimization problem: minimize f(x) subject to the constraints \( g_{i}(x)\geqq 0,\; i=1,\cdots ,m, \; \text{and} \; h_{j}(x)=0,\; j=1,\cdots,p, \) where the problem functions are twice continuously differentiable. The necessary conditions extend the well-known results, obtained with Lagrange multipliers, which apply to equality constrained optimization problems, and the Kuhn-Tucker conditions, which apply to mixed inequality and equality problems when the problem functions are required only to have continuous first derivatives. The sufficient conditions extend similar conditions which have been developed only for equality constrained problems. Examples of the applications of these sets of conditions are given.
Garth P. McCormick
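The second-order sufficiency test the abstract describes amounts to checking positive curvature of the Lagrangian on the tangent space of the active constraints. A minimal sketch on a toy equality-constrained problem (the problem, point, and multiplier below are assumptions for illustration):

```python
# Second-order sufficiency check on a toy problem (an assumption):
#   minimize f(x) = x1**2 + x2**2  subject to  h(x) = x1 + x2 - 2 = 0.
# At x* = (1, 1) with multiplier w = 2 (so grad f = w * grad h), the
# Hessian of L = f - w*h must be positive definite on the tangent
# space { d : grad h . d = 0 }.

x_star = (1.0, 1.0)
w = 2.0
grad_f = (2 * x_star[0], 2 * x_star[1])   # (2, 2)
grad_h = (1.0, 1.0)

# First-order check: grad f - w * grad h = 0
first_order = all(abs(gf - w * gh) < 1e-9 for gf, gh in zip(grad_f, grad_h))

# Hessian of L is constant, 2*I (h is linear, so its Hessian vanishes)
hess_L = [[2.0, 0.0], [0.0, 2.0]]

# Tangent directions satisfy d1 + d2 = 0, spanned by d = (1, -1);
# sufficiency needs d' * hess_L * d > 0 along this direction
d = (1.0, -1.0)
curvature = sum(d[i] * hess_L[i][j] * d[j] for i in range(2) for j in range(2))

print(first_order, curvature > 0)
```

Because the curvature is strictly positive on the tangent space, x* = (1, 1) is a strict local (here, global) minimizer, exactly the pattern of the sufficient conditions for equality constraints that the paper extends to inequalities.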
An Indirect Sufficiency Proof for the Problem of Lagrange with Differential Inequalities as Added Side Conditions
Abstract
The problem to be considered here consists in finding in a class of arcs \( C:\rm {y}^{i}(x)(i=1,\cdots ,n;x^{1}\leqq \; x \; \leqq x^{2})\) joining two fixed points and satisfying a set of differential inequalities and equations of the form
$$ \begin{array}{clclclcl}{\phi^{\beta}(x,y,\dot{y})\geqq 0,} & {{\Psi}^{p}(x,y,\dot{y})= 0}\end{array} $$
that one which minimizes the integral
$$ I(c)={\int^{x^{2}}_{x^{1}}} \; f(x,y,\dot{y})dx. $$
Louis L. Pennisi
Lagrange Multipliers Revisited
Abstract
The present paper was inspired by the work of Kuhn and Tucker [1]. These authors transformed a certain class of constrained maximum problems into equivalent saddle value (minimax) problems. Their work seems to hinge on the consideration of still a third type of problem. A very simple but illustrative form of this problem is the following: let \( x \in \) the positive orthant of some finite dimensional Euclidean space, and let f and g be real valued functions of x with the property that whenever \( f \geq 0 \), then also \( g \geq 0 \); under what conditions can one then conclude that there exists a non-negative constant u such that \( uf \leq g \) for \( \underline{all} \; x \geq 0 \)?
Morton Slater
The Kuhn-Tucker Theorem in Concave Programming
Abstract
In order to solve problems of constrained extrema, it is customary in the calculus to use the method of the Lagrangian multiplier. Let us, for example, consider a problem: maximize f(x1, ..., xn) subject to the restrictions gk(x1, ..., xn) = 0 (k = 1, ..., m).
Hirofumi Uzawa
The Problem of Lagrange with Differential Inequalities as Added Side Conditions
Abstract
An equivalent problem is introduced in Section 2 of this paper by considering functions \( z_{\beta}(x) \) such that the equations
$$ \phi_{\beta}(x,\; y\;, y') = \left(z_{\beta}'\right)^{2} $$
hold.
Frederick Albert Valentine
On Linear Inequalities
Abstract
Linear inequalities were studied with some degree of generality at least as early as the time of Fourier (1824). However, the first significant contribution to their theory was made by Minkowski in his Geometrie der Zahlen in 1896. Since that time many papers have appeared in Europe, America, and Japan which have to do more or less directly with the subject. But some of these have been published in places unexpected or not easily accessible, and no general survey has appeared which attempts to take account of all of them.
Lloyd L. Dines, N. H. McCoy
Nonlinear Programming: A Historical View
Abstract
Secondly, in order to discuss these influences in a precise context, a few key results will be stated and "proved". This will be done in an almost self-contained manner in the spirit of the call for this symposium, which announced that the lectures would be pedagogical. The definitions and statements should help to set the stage for some of the papers to follow by providing a formal framework. In addition, these statements will allow the comparison of the results of various mathematicians who made early contributions to nonlinear programming. This will also give the pleasant opportunity to rewrite some history and give W. Karush his proper place in the development.
Harold W. Kuhn
Recherches Sur La Méthode de Maximis et Minimis.
Abstract
Geometers have long known that when the first differential of any variable vanishes while its second differential does not vanish at the same time, the variable always becomes a maximum or a minimum; in particular, it is a maximum if its second differential is negative, and a minimum if that differential is positive.
Miscellanea Taurinensia
Metadata
Title
Traces and Emergence of Nonlinear Programming
Editors
Giorgio Giorgi
Tinne Hoff Kjeldsen
Copyright Year
2014
Publisher
Springer Basel
Electronic ISBN
978-3-0348-0439-4
Print ISBN
978-3-0348-0438-7
DOI
https://doi.org/10.1007/978-3-0348-0439-4
