
## About this Book

The 9th Belgian-French-German Conference on Optimization was held in Namur (Belgium) on September 7-11, 1998. This volume is a collection of papers presented at this Conference. Originally it was a French-German Conference, but this year, in accordance with the organizers' wishes, a third country, Belgium, joined the founding members. Hence the name: Belgian-French-German Conference on Optimization.

Since the very beginning, the purpose of these Conferences has been to bring together researchers working in the area of Optimization and particularly to encourage young researchers to present their work. Most of the participants come from the organizing countries. However, the general tendency is to invite outside researchers to attend the meeting. This year, among the 101 participants at this Conference, twenty researchers came from other countries.

The general theme of the Conference is everything that concerns the area of Optimization, without restriction to particular topics. Theoretical aspects of Optimization are thus developed alongside applications and algorithms. However, and this point was very important for the organizers, the Conference must retain its convivial character: no more than two parallel sessions are organized, so that useful contacts between researchers can be promoted.

The editors express their sincere thanks to all those who took part in this Conference. Their invaluable discussions have made this volume possible.

## Table of Contents

### Linearization Method for Solving Equilibrium Programming Problems

In this paper, we study the canonical form of the equilibrium programming problem. For solving it, we consider the linearization method and we prove its convergence.

Anatoly Antipin

### The heavy ball with friction dynamical system for convex constrained minimization problems

The “heavy ball with friction” dynamical system $$\ddot{u} + \gamma \dot{u} + \nabla \Phi (u) = 0$$ is a non-linear oscillator with damping (γ > 0). In [2], Alvarez proved that when H is a real Hilbert space and Φ : H → ℝ is a smooth convex function whose minimal value is achieved, then each trajectory t → u(t) of this system weakly converges towards a minimizer of Φ. We prove a similar result in the convex constrained case by considering the corresponding gradient-projection dynamical system $$\ddot{u} + \gamma \dot{u} + u - \mathrm{proj}_{C}(u - \mu \nabla \Phi (u)) = 0,$$ where C is a closed convex subset of H. This result holds when H is a possibly infinite dimensional space, and extends, by using different techniques, previous results by Antipin [1].

H. Attouch, F. Alvarez
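To illustrate the gradient-projection dynamical system above, here is a minimal numerical sketch (not from the paper): an explicit Euler discretization of the second-order system, applied to an invented objective Φ and box constraint C; all function names, step sizes, and parameters are illustrative choices, not the authors' setup.

```python
import numpy as np

def heavy_ball_projected(grad, proj, u0, gamma=1.0, mu=0.5, h=0.01, steps=20000):
    """Explicit Euler discretization of
        u'' + gamma*u' + u - proj_C(u - mu*grad Phi(u)) = 0,
    rewritten as a first-order system in (u, v) with v = u'."""
    u = np.asarray(u0, dtype=float)
    v = np.zeros_like(u)
    for _ in range(steps):
        a = -gamma * v - (u - proj(u - mu * grad(u)))  # acceleration u''
        u = u + h * v
        v = v + h * a
    return u

# Example: minimize Phi(u) = ||u - (2, 2)||^2 / 2 over the unit box C = [0, 1]^2.
grad = lambda u: u - np.array([2.0, 2.0])
proj = lambda u: np.clip(u, 0.0, 1.0)   # projection onto C
u_star = heavy_ball_projected(grad, proj, u0=[-1.0, 0.5])
```

For this damped convex example, the discretized trajectory settles near (1, 1), the constrained minimizer of Φ over C.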

### Active Set Strategy for Constrained Optimal Control Problems: the Finite Dimensional Case

We consider control constrained and state constrained optimal control problems governed by elliptic partial differential equations, once they have been discretized. We propose and analyze an algorithm for efficient solution of these finite dimensional problems. It is based on an active set strategy involving primal as well as dual variables and is suggested by a generalized Moreau-Yosida regularization of the control (or state) constraint. Sufficient conditions for convergence in finitely many iterations are given. At last we present numerical examples and discuss the role of the strict complementarity condition.

Maïtine Bergounioux, Karl Kunisch

### Vector Variational Principles Towards Asymptotically Well Behaved Vector Convex Functions

The aim of this paper is to present and analyse several kinds of ε-efficiency related to some metrically consistent vector variational principles, as a converse of the asymptotic well-behaviour of vector convex functions.

Serban Bolintinéanu

### Tangential Regularity of Lipschitz Epigraphic Set-valued Mappings

A set-valued mapping M from a topological vector space E into a normed vector space F is tangentially regular at a point $$(\bar{x},\bar{y})$$ in its graph gph M if the Clarke tangent cone to gph M at $$(\bar{x},\bar{y})$$ is equal to the Bouligand contingent cone to gph M at $$(\bar{x},\bar{y})$$. Recently, after the work of Burke, Ferris and Qian on directional regularity of distance functions associated with subsets, we characterized in several cases the tangential regularity of a set-valued mapping M as the directional regularity of the scalar function Δ_M defined by Δ_M(x, y) := d(y, M(x)). In this paper we characterize this tangential regularity, for a new class of set-valued mappings, as the directional regularity of Δ_M in a weak sense. We also show that the images of a set-valued mapping are tangentially regular whenever its graph is tangentially regular.

M. Bounkhel, L. Thibault

### Relaxed Assumptions for Iteration Methods

We consider a general inexact iterative method for finding a fixed point of a nonexpansive mapping on a real Hilbert space. Proofs of convergence of sequences generated by such a method generally require at least that the error term goes to zero. The aim of the present paper is to weaken this nonrealistic theoretical assumption. The obtained result is applied to the proximal point algorithm.

Myrana Brohé, Patricia Tossings

### Approximation Methods in Equilibrium and Fixed Point Theory: Proximally Nondegenerate Sets

The aim of this paper is to show how approximation methods allow one to extend Equilibrium and Fixed Point Theory to the nonsmooth and nonconvex setting. Our approach relies heavily on the use of the distance function d_M to a set M. We show that, for our purpose, Clarke’s subdifferential ∂d_M is too big, and that the appropriate notion is the upper limit $$\partial^{+} d_{M}(\bar{x}) \overset{\mathrm{def}}{=} \limsup_{x \to \bar{x},\, x \notin M} \partial d_{M}(x).$$

Bernard Cornet, Marc-Olivier Czarnecki

### Eigenvalue Localization for Multivalued Operators

Let (H, 〈•,•〉) be a Hilbert space, and F : H → H be an operator with closed convex values. Denote by σ(F) the set of all (real) eigenvalues of F. As shown in this note, if F satisfies a so-called “upper-normality” assumption, then it is possible to derive a simple variational formula for the supremum of σ(F). Lower-normality of F yields, of course, an analogous formula for the infimum of σ(F).

Rafael Correa, Alberto Seeger

### Arrow-Barankin-Blackwell Theorems and Related Results in Cone Duality: A Survey

We attempt a brief survey on cone duality and on density theorems of Arrow-Barankin-Blackwell type. Concerning the latter aspect we show the equivalence of two recent and ostensibly different results. We follow a unified approach which provides in particular a simple way of surveying these results and their proofs.

Aris Daniilidis

### Formulating and Solving Nonlinear Programs as Mixed Complementarity Problems

We consider a primal-dual approach to solve nonlinear programming problems within the AMPL modeling language, via a mixed complementarity formulation. The modeling language supplies the first order and second order derivative information of the Lagrangian function of the nonlinear problem using automatic differentiation. The PATH solver finds the solution of the first order conditions which are generated automatically from this derivative information. In addition, the link incorporates the objective function into a new merit function for the PATH solver to improve the capability of the complementarity algorithm for finding optimal solutions of the nonlinear program. We test the new solver on various test suites from the literature and compare with other available nonlinear programming solvers.

Michael C. Ferris, Krung Sinapiromsaran

### Convergence Analysis of a Perturbed Version of the θ-Scheme of Glowinski-Le Tallec

Many problems of convex programming can be reduced to that of finding a zero of the sum of two maximal monotone operators. Different splitting methods have been proposed to solve this problem, for example the Forward-Backward scheme and the θ-scheme. In this paper, we consider the θ-scheme. Our purpose is to give a convergence result for a perturbed version of it. Then, we consider a convex programming problem and we present some classes of penalty functions associated with this problem which could be used to apply the perturbed θ-scheme to it.

Sylvianne Haubruge, Van Hien Nguyen, Jean-Jacques Strodiot

### Qualitative Stability of Convex Programs with Probabilistic Constraints

We consider convex stochastic optimization problems with probabilistic constraints which are defined by so-called r-concave probability measures. Since the true measure is unknown in general, the problem is usually solved on the basis of estimated approximations, hence the issue of perturbation analysis arises in a natural way. For the solution set mapping and for the optimal value function, stability results are derived. In order to include the important class of empirical estimators, the perturbations are allowed to be arbitrary in the space of probability measures (in contrast to the convexity property of the original measure). All assumptions relate to the original problem.

René Henrion

### An Approximation Framework for Infinite Horizon Optimization Problems in a Mathematical Programming Setting

Dynamic optimization problems, including optimal control problems, have typically relied on the solution techniques of dynamic programming, involving the sequential solution of certain optimality equations. However, many problems cannot be handled this way, due to complex constraints, a continuous state space, and other complicating factors. When such problems are recast as mathematical programs, the powerful tools of optimization, especially duality and decomposability for dealing with very large problems, lift the boundaries imposed by dynamic programming. This paper develops approximation techniques for stationary infinite horizon problems with discounted costs, in the framework of mathematical programming. A reference is given for parallel results for stochastic dynamic optimization problems.

Lisa A. Korf

### An Ergodic Theorem for Stochastic Programming Problems

To justify the use of sampling to solve stochastic programming problems one usually relies on a law of large numbers for random lsc (lower semicontinuous) functions when the samples come from independent, identical experiments. If the samples come from a stationary process, one can appeal to the ergodic theorem proved here. The proof relies on the ‘scalarization’ of random lsc functions.

Lisa A. Korf, Roger J-B Wets

### Proximal Methods: Decomposition and Selection

We study the convergence of a general splitting method for finding a particular zero of the sum of two maximal monotone operators A and B, in which a forward-backward splitting iteration is applied to a sequence of approximating operators A_n, B_n converging to A and B. We prove that when these approximating operators converge sufficiently slowly, the sequence generated by the method strongly converges towards a particular solution of the given problem, provided that there is one. In the process, we revisit and make a link between the Passty (PS) and Barycentric (BPM) methods (cf. [22], [9]). It is well known that a weighted average of the iterates of (PS), weighted by stepsizes, weakly converges to a solution of the given problem. However, such ergodic convergence does not seem very useful in practice. Here, we combine the (PS) iteration with Tikhonov regularization to obtain strong convergence of the generated sequence to the solution of minimal norm. Finally, to motivate the selection concept, we briefly propose an application in the finance area: portfolio selection.

N. Lehdili, C. Ould Ahmed Salem
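The forward-backward splitting iteration underlying the abstract above can be sketched generically (this is a plain illustration, not the authors' approximating or regularized scheme): A = ∇f is handled by an explicit forward step, B by its resolvent, here a projection; the objective and box constraint are invented for the example.

```python
import numpy as np

def forward_backward(grad_f, prox_g, x0, lam=0.1, iters=500):
    """Forward-backward splitting for 0 in A x + B x, with A = grad f
    applied explicitly (forward) and B via its resolvent prox_g (backward)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = prox_g(x - lam * grad_f(x))
    return x

# Example: min f(x) + g(x) with f(x) = ||x - (3, -1)||^2 / 2 and
# g the indicator of the box [0, 2]^2, whose prox is the projection.
grad_f = lambda x: x - np.array([3.0, -1.0])
prox_g = lambda x: np.clip(x, 0.0, 2.0)
sol = forward_backward(grad_f, prox_g, x0=[0.0, 0.0])
```

Each pass contracts towards the projection of the unconstrained minimizer onto the box, here the point (2, 0).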

An optimization technique is developed for rigid-plastic axisymmetric plates subjected to impulsive loading. The thickness of the plate is assumed to be piecewise constant. Both, ideal homogeneous and fiber reinforced materials are considered. Necessary optimality conditions are established with the aid of variational methods of the theory of optimal control whereas the method of mode form motions is used. Numerical results are presented for an annular plate clamped at the inner edge and free at the outer edge.

Jaan Lellep

### From Fixed Point Regularization to Constraint Decomposition in Variational Inequalities

Extending the Tikhonov regularization method to the fixed point problem for a nonexpansive self mapping P on a real Hilbert space H generates a family of fixed points u_r of strongly nonexpansive self mappings P_r on H, with positive parameter r tending to 0. If the fixed point set C of P is nonempty, then u_r converges strongly to u*, the unique solution to some monotone variational inequality on the (closed convex) subset C. The iteration method, suitably combined with this regularization, generates a sequence that converges strongly to u*. When C is a priori defined by finitely many convex inequality constraints, expressing C as the fixed point set of a suitable nonexpansive mapping and applying the above method leads to an iterative scheme in which each step is decomposed into finitely many successive or parallel projections or proximal computations.

Bernard Lemaire, Cheikh Ould Ahmed Salem

### Large Scale Molecular Conformation via the Exact Distance Geometry Problem

We develop in this paper a method based on a d.c. (difference of convex functions) optimization approach, called DCA, for solving the large-scale exact distance geometry problem. Requiring only matrix-vector products and one Cholesky factorization, the DCA seems to be robust and efficient for large scale problems. Moreover it allows exploiting the sparsity of the given distance matrix. A technique using the triangle inequality to generate a complete approximate distance matrix was investigated in order to compute a good starting point for the DCA. Numerical simulations of molecular conformation problems with up to 12288 variables are reported, which prove the robustness, the efficiency, and the globality of our algorithms.

Le Thi Hoai An, Pham Dinh Tao
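The generic DCA iteration behind the abstract above alternates a subgradient of the concave part with an exact convex minimization. A toy one-dimensional sketch follows; the d.c. decomposition and functions are illustrative only, not the paper's distance-geometry formulation.

```python
import numpy as np

def dca(grad_h, argmin_g_linear, x0, iters=60):
    """Generic DCA iteration for min g(x) - h(x), with g and h convex:
    linearize h at x_k, then minimize the resulting convex majorant exactly."""
    x = float(x0)
    for _ in range(iters):
        y = grad_h(x)               # y_k in the subdifferential of h at x_k
        x = argmin_g_linear(y)      # x_{k+1} = argmin_x g(x) - y*x
    return x

# Toy d.c. program: f(x) = (x^2 - 1)^2 / 4 = g(x) - h(x),
# with g(x) = x^4/4 + 1/4 (convex) and h(x) = x^2/2 (convex).
grad_h = lambda x: x
# argmin_x x^4/4 - y*x solves x^3 = y, i.e. x = cbrt(y)
argmin_g_linear = lambda y: float(np.cbrt(y))
x_star = dca(grad_h, argmin_g_linear, x0=0.5)
```

Started from x = 0.5, the iterates converge to x = 1, a global minimizer of this nonconvex f.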

### Adaptive Scaling and Convergence Rates of a Separable Augmented Lagrangian Algorithm

We analyze the numerical behaviour of a separable Augmented Lagrangian algorithm. This algorithm, which is equivalent to the Proximal Decomposition algorithm in the convex case, uses a dual sequence whose convergence is associated with the inverse of the primal penalty parameter. As a consequence, an optimal value for that parameter, which is thus more like a scaling parameter than a penalty one, is expected, confirming former theoretical results in the strongly convex case. We propose an implementable algorithm where the scaling parameter is adjusted at each iteration in order to keep the same rate of convergence for both the primal and dual sequences. The correcting effect of that parameter update is illustrated on small quadratic problems. The autoscaled decomposition algorithm is then tested on larger block-angular problems with convex or nonconvex separable cost functions.

Philippe Mahey, Jean-Pierre Dussault, Abdelhamid Benchakroun, Abdelouahed Hamdi

### Numerical Considerations on Decomposition and Augmented Lagrangians

We consider a general large scale constrained minimization problem with separable and differentiable objective function and constraints. The augmented Lagrangian method cannot be directly applied to such a problem because the resulting penalty function is not separable. We first recall two existing approaches: a method based on a linearization of the quadratic term in the augmented Lagrangian, and SALA (for Separable Augmented Lagrangian Algorithm), which is a generalization of Proximal Decomposition. We also present a new approach based on the decomposition of the line search in primal-dual algorithms, which however requires the convexity of the augmented Lagrangian function. We finally compare the three methods on small and large scale test examples, insisting on robustness in parameter tuning.

Jean-Baptiste Malézieux, Jean-Christophe Culioli

### Bicriteria Linear Fractional Optimization

In multiple objective programming it is generally more convenient to study the efficient outcome instead of the efficient solution set. In general an approximation of this efficient outcome is obtained by solving a sequence of optimization problems. In this work we consider a special class of bicriteria optimization problems with linear fractional objectives and linear constraints. It is shown that the efficient outcome is the graph of a piecewise linear fractional curve in the plane, which can be easily computed. A finite algorithm is presented which generates this curve by simply considering some particular edges of the constraint polyhedron.

Christian Malivert, Nicolae Popovici

### Tikhonov Fixed-Point Regularization

The main purpose of this note is to propose viscosity approximation methods which amount to selecting a particular fixed-point of a given nonexpansive self mapping in a general Hilbert space. The connection with the selection principles of Attouch, in the context of convex minimization and monotone inclusion problems, is made and an application to a semi-coercive elliptic problem is then stated.

Abdellatif Moudafi

### Augmented Lagrangians for General Side Constraints

We consider augmented Lagrangians for the general (nonconvex) optimization problem $$\inf\,\{F(x) : x \in X,\ G(x) \in M\},$$ where X is an arbitrary set, M is a closed subset of some real topological vector space Z, and F: X → ℝ, G: X → Z.

Werner Oettli, Dirk Schläger, Michel Théra

### Using Analytic Center and Cutting Planes Methods for Nonsmooth Convex Programming

We propose a method for unconstrained minimization of nondifferentiable convex functions. At each iteration, the algorithm includes a reinitialization procedure, due to a new cut, and some iterations of the analytic center method, namely those necessary to lower the upper bound on the best iterate function value. The convergence and the polynomial complexity per iteration are established, and numerical tests with typical problems from the literature are presented. The results are similar to those obtained by bundle methods and by algorithms that use the analytic center.

Paulo Roberto Oliveira, Marcos Augusto dos Santos

### Recent Advances on Second-Order Optimality Conditions

We compare some recent proposals dealing with necessary conditions and sufficient conditions for optimization problems with explicit or implicit constraints. We add a new condition which involves an additional term which bears on the constraint only, and for this reason is simpler than the known conditions. We also introduce two new types of second-order generalized derivatives and examine their uses for such a question.

Jean-Paul Penot

### On Some Relations Between Generalized Partial Derivatives and Convex Functions

In [8] and [9] a generalization of the concept of the ordinary gradient was presented, which exists for a broad class of nondifferentiable functions. This concept differs from traditional “set-valued” generalizations by an expansion of the directional derivative into a special orthogonal series. By investigating the coefficients of this series and interpreting them as a kind of partial derivative, formal analogies to properties of classical partial derivatives can be drawn. Moreover, such an approach reveals relations between questions appearing within the theory of optimization and results coming from other branches of mathematics, especially from the theory of harmonic and subharmonic functions. Applying the concept of generalized partial derivatives to the case where the functions under consideration are convex, a characterization of convexity, differentiability properties and a necessary optimality condition in terms of generalized partial derivatives can be given.

Peter Recht

### A Perturbed and Inexact Version of the Auxiliary Problem Method for Solving General Variational Inequalities with a Multivalued Operator

We consider general variational inequalities with a multivalued maximal monotone operator in a Hilbert space. For solving these problems, Cohen developed several years ago the auxiliary problem method. Perturbed versions of this method have already been studied in the literature for the single-valued case. They allow one to consider, for example, barrier functions and interior approximations of the feasible domain. In this paper, we present a relaxation of these perturbation methods by using the concept of ε-enlargement of a maximal monotone operator. We prove that, under classical assumptions, the sequence generated by this scheme is bounded and weakly convergent to a solution of the problem. Strong convergence is also obtained under additional conditions. In the particular case of nondifferentiable convex optimization, the ε-subdifferential takes the place of the ε-enlargement and some assumptions for convergence can be weakened. In the nonperturbed situation, our scheme reduces to the projected inexact subgradient procedure.

Geneviève Salmon, Van Hien Nguyen, Jean-Jacques Strodiot

### Contributions to a Multiplier Rule for Multiobjective Optimization Problems

In this paper, we consider a necessary optimality condition in the form of a multiplier rule for so-called K-derived sets, given by Breckner in 1994. For this multiplier rule, we present an alternative method of proof, without making use of the theorem of Hoang Tuy that was needed in Breckner’s original paper. Furthermore, an additional complementarity relation is derived for this rule, as well as regularity conditions for the multipliers, the latter according to an idea of Hestenes for the scalar case.

Matthias Sekatzek

### The Maximum Flow in a Time-Varying Network

The classical maximum flow problem is to send as much flow as possible from the source to the sink. The problem is static, in that arc capacity is a constant. The max-flow min-cut theorem of Ford and Fulkerson [1] is a fundamental result which reveals the relationship between the maximum flow and the minimum cut in a static network. It has played an important role not only in investigating the classical maximum flow problem, but also in developing graph theory, since many results can be regarded as its corollaries.

In practical situations, there are, however, numerous problems where the attributes of the network under consideration are time-varying. In such a network, a flow must take a certain time to traverse an arc. The transit time on an arc and the capacity of an arc are time-varying parameters. To depart at the best time, a flow can wait at the beginning vertex of an arc, which is however constrained by a time-varying vertex capacity. For instance, consider a network in which several cargo-transportation services are available between a number of cities. These may include air, sea, and road transportation, which are available at different times and have different capacities. Waiting at a city is permitted, but it is limited by the space of the warehouse and dependent upon the time. A question often asked is: what is the maximum flow that can be sent between two particular cities within a certain duration T?

We show that this time-varying maximum flow problem is NP-complete. We then establish the max-flow min-cut theorem in the time-varying version. With this result, we develop an approach which can solve the time-varying maximum flow problem with arbitrary waiting times at vertices in pseudopolynomial time.

D. Sha, X. Cai, C. K. Wong
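For the static setting this abstract starts from, the Ford-Fulkerson max-flow/min-cut relationship can be demonstrated with a short Edmonds-Karp sketch. The network below is an invented toy instance; the paper's time-varying model with transit times and vertex waiting is far richer.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on a static network; cap[u][v] is the
    capacity of arc (u, v). Repeatedly augments along shortest (BFS) paths."""
    n = len(cap)
    res = [row[:] for row in cap]            # residual capacities
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and res[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:                  # no augmenting path: done
            break
        b, v = float('inf'), t               # bottleneck along the BFS path
        while v != s:
            b = min(b, res[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            res[parent[v]][v] -= b
            res[v][parent[v]] += b
            v = parent[v]
        flow += b
    return flow

# 4-node example with source 0 and sink 3.
cap = [[0, 3, 2, 0],
       [0, 0, 1, 3],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
value = max_flow(cap, 0, 3)
# the max-flow value is 5, equal to the capacity of the min cut
# separating {0} from {1, 2, 3}: cap(0->1) + cap(0->2) = 3 + 2
```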

### Transmission Network Planning Under Uncertainty with Benders Decomposition

This paper treats the problem of the long term transmission expansion planning of an electrical network under uncertainty. A static form of this problem is considered; transmission constraints are represented in the DC approximation (linearized power flow). Uncertainty is taken into account to evaluate the operating cost (demand, availability); the investment variables are binary. The problem is therefore a large stochastic mixed integer one, requiring a considerable computational effort. This article proposes a new formulation of the problem, which is well suited to the application of Benders Decomposition. Several ways of dealing with the cuts obtained at each iteration are described and compared. Numerical tests show the practical interest of the algorithm finally obtained.

Panagiota Tsamasphyrou, Arnaud Renaud, Pierre Carpentier

### On Some Recent Advances and Applications of D.C. Optimization

We review some recent advances of d.c. optimization methods in the analysis and solution of specially structured nonconvex optimization problems, including problems from continuous location, nonconvex quadratic programming and monotonic optimization.

Hoang Tuy

### Backmatter
