Operations Research Letters

Volume 39, Issue 5, September 2011, Pages 397-399
Quasi-Newton methods for solving multiobjective optimization

https://doi.org/10.1016/j.orl.2011.07.008

Abstract

This paper presents a quasi-Newton-type algorithm for nonconvex multiobjective optimization. The iterations are repeated until the termination condition is met, namely when no suitable descent direction can be found. Under suitable assumptions, global convergence is established.

Highlights

► We propose a quasi-Newton-type method for nonconvex multiobjective programming.
► We constrain the norms of the descent directions to improve the performance of the algorithm.
► The algorithm stops only when no descent direction can be found.
► We present a global convergence theorem for the new algorithm.

Introduction

Consider the following unconstrained nonconvex multiobjective optimization problem: $\min_{x \in \mathbb{R}^n} F(x)$, where $F = (F_1, \dots, F_m)^T : X \to \mathbb{R}^m$ is continuously differentiable and $X \subseteq \mathbb{R}^n$, the domain of $F$, is assumed to be open.

For any $u, v \in \mathbb{R}^m$ we define $u \le v \iff v - u \in \mathbb{R}^m_+ \iff v_j - u_j \ge 0$ for all $j \in I$, and $u < v \iff v - u \in \mathbb{R}^m_{++} \iff v_j - u_j > 0$ for all $j \in I$, where $\mathbb{R}^m_+ := \{ y \in \mathbb{R}^m \mid y \ge 0 \}$ and $\mathbb{R}^m_{++} := \{ y \in \mathbb{R}^m \mid y > 0 \}$. We first provide the concept of optimality, i.e., Pareto optimality or efficiency, as explained below.

Definition 1

A point $x^* \in X$ is a local efficient point (local Pareto optimum) of $F$ if and only if there exists a neighborhood $V \subseteq X$ of $x^*$ such that there does not exist $x \in V$ with $F(x) \le F(x^*)$ and $F(x) \ne F(x^*)$.
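To make the componentwise order and the non-dominance condition of Definition 1 concrete, the following is a minimal Python sketch; the function names and the finite-sample check are illustrative assumptions and are not part of the paper.

```python
import numpy as np


def weakly_dominates(u, v):
    # u <= v in the componentwise order: v - u lies in R^m_+, i.e. v_j - u_j >= 0 for all j.
    return bool(np.all(np.asarray(v) - np.asarray(u) >= 0))


def strictly_dominates(u, v):
    # u < v: v - u lies in R^m_{++}, i.e. v_j - u_j > 0 for all j.
    return bool(np.all(np.asarray(v) - np.asarray(u) > 0))


def dominated_in_sample(Fx_star, F_values):
    """Definition 1 test on sampled objective vectors: is there F(x) <= F(x*) with F(x) != F(x*)?

    Note: a finite sample can only refute local efficiency, never certify it.
    """
    return any(
        weakly_dominates(Fx, Fx_star) and not np.array_equal(Fx, Fx_star)
        for Fx in F_values
    )
```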

Note that if $X$ is convex and $F$ is $\mathbb{R}^m$-convex (i.e., $F$ is componentwise convex), then local Pareto optimality is equivalent to global Pareto optimality. In the nonconvex case, however, this equivalence no longer holds, and a necessary (but in general not sufficient) condition for Pareto optimality, which has already been used in [1], [2] to define descent-type algorithms for multiobjective optimization, is defined as follows.

Definition 2

  • (i)

    A point $x \in X$ is critical (or stationary) for $F$ if $\mathcal{R}(\nabla F(x)) \cap (-\mathbb{R}^m_{++}) = \emptyset$, where $\mathcal{R}(\nabla h(x))$ denotes the range or image space of the gradient (Jacobian) of the continuously differentiable function $h$ at $x$;

  • (ii)

    $d \in \mathbb{R}^n$ is a descent direction for $F$ at $x$ if for every $j \in I = \{1, \dots, m\}$ the directional derivative of the corresponding component function $F_j$ in the direction $d$ satisfies $F'_j(x; d) < 0$, where the directional derivative at $x$ in the direction $d$ is defined as $F'_j(x; d) = \lim_{\alpha \to 0} \frac{F_j(x + \alpha d) - F_j(x)}{\alpha}$.

In this paper, we propose a quasi-Newton method for computing critical points of smooth multiobjective optimization problems without convexity assumptions. We extend the ideas of [1] and propose an extension of the "classical" quasi-Newton method for computing the descent search direction. As in [1], this generalization is motivated by two considerations: (i) it does not require computation of the Hessians; (ii) it does not require any convexity assumption. This work uses quasi-Newton updates to approximate the second-order information of each objective function, a nontrivial change from the "classical" quasi-Newton method. Our algorithm replaces each objective function by a quadratic model and iterates until a critical point is found. The model improves performance by constraining the norms of the descent directions and by using a small positive scalar to control the descent direction. Under suitable assumptions, we establish a global convergence result for the new algorithm.
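To make this construction concrete, the following is a minimal sketch, not the paper's exact algorithm, of how such a direction can be computed in the spirit of [1]: each objective $F_j$ is replaced by the quadratic model $\nabla F_j(x)^T d + \tfrac{1}{2} d^T B_j d$ with a positive-definite quasi-Newton approximation $B_j$, and the direction minimizes the worst model value. The helper names and the choice of SciPy's SLSQP solver are illustrative assumptions.

```python
# Minimal sketch (assumed details, not the authors' exact method):
# solve  min_{t, d} t  s.t.  grad_j^T d + 0.5 d^T B_j d <= t  for all j,
# i.e. minimize the worst quadratic model among the objectives.
import numpy as np
from scipy.optimize import minimize


def quasi_newton_direction(grads, Bs):
    """grads: list of gradient vectors at x; Bs: list of positive-definite B_j approximations."""
    n = grads[0].size
    constraints = [
        {
            "type": "ineq",
            # SciPy convention: constraint value must be >= 0, i.e.
            #   t - (g^T d + 0.5 d^T B d) >= 0.
            "fun": lambda z, g=g, B=B: z[0] - (g @ z[1:] + 0.5 * z[1:] @ B @ z[1:]),
        }
        for g, B in zip(grads, Bs)
    ]
    z0 = np.zeros(n + 1)  # start from (t, d) = (0, 0), which is feasible
    res = minimize(lambda z: z[0], z0, constraints=constraints, method="SLSQP")
    t_opt, d = res.x[0], res.x[1:]
    # t_opt clearly below zero: d is a descent direction for every objective;
    # t_opt close to zero: x behaves like a critical point and the iteration can stop.
    return d, t_opt
```

Between iterations each $B_j$ would be refreshed by a BFGS-type update kept positive definite; the paper's precise update rule and the norm constraint on the direction mentioned above are the parts this sketch intentionally leaves out.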

Section snippets

Main results

A descent direction $d$ is a direction that reduces every objective function value when it is used to update the point $x$. From (3) and (4), $d \in \mathbb{R}^n$ is a descent direction for $F$ at $x$ if and only if $\nabla F_j(x)^T d < 0$ for all $j \in I$.
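As a small illustration of this characterization (the function name and sample gradients below are assumptions, not from the paper), a candidate direction can be tested by checking the sign of each gradient inner product:

```python
import numpy as np


def is_descent_direction(grads, d):
    # d is a descent direction for F at x iff grad F_j(x)^T d < 0 for every j in I.
    return all(float(g @ d) < 0.0 for g in grads)


# Two objectives at some point x with gradients (1, 0) and (0, 1).
grads = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(is_descent_direction(grads, np.array([-1.0, -1.0])))  # True: decreases both objectives
print(is_descent_direction(grads, np.array([-1.0,  1.0])))  # False: increases the second objective
```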

Proposition 1

$x \in X$ is critical if and only if either one of the following two conditions is satisfied:

  • (i)

    There does not exist $d \in \mathbb{R}^n$ such that $F'_j(x; d) < 0$ for all $j \in I$;

  • (ii)

    In the special case, there also exists at least one $j_0 \in I$ such that $\nabla F_{j_0}(x) = 0$.

Proof

If $x \in X$ is critical for $F$, then from (2), for all $d \in \mathbb{R}^n$ there …
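As a small, hedged illustration of Proposition 1 (the one-dimensional example below is an assumption for exposition, not taken from the paper), consider $F(x) = (x^2, (x-1)^2)$ on $\mathbb{R}$: for every $x \in [0, 1]$ the gradients $2x$ and $2(x-1)$ never share a strict sign, so no common descent direction exists and every such point is critical.

```python
# Toy check of Proposition 1 in one dimension (illustrative example, not from the paper):
# F(x) = (x^2, (x - 1)^2). A common descent direction exists iff all component
# gradients share a strict sign, so criticality is the negation of that.
def is_critical_1d(grad_values):
    return not (all(g > 0 for g in grad_values) or all(g < 0 for g in grad_values))


for x in (-0.5, 0.0, 0.5, 1.0, 1.5):
    grads = [2 * x, 2 * (x - 1)]
    print(f"x = {x:4}: critical = {is_critical_1d(grads)}")
# Points in [0, 1] (the Pareto set of this example) are reported as critical;
# x = 0 and x = 1 also satisfy condition (ii), since one gradient vanishes there.
```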

Acknowledgments

We are grateful to the editors and referees for their suggestions and comments. This work was supported by the National Science Foundation Grant of China (No. 10926077), the Natural Scientific Research Innovation Foundation in Harbin Institute of Technology (No. HIT.NSRIF.2009074), the Program of Excellent Team in Harbin Institute of Technology, the China Post-doctoral Science Foundation (No. 01107172), and the Heilongjiang Province Post-doctoral …

References (4)

  • J.E. Dennis et al., Numerical Methods for Unconstrained Optimization and Nonlinear Equations (1996)

  • J. Fliege et al., Newton's method for multiobjective optimization, SIAM Journal on Optimization (2009)
There are more references available in the full text version of this article.
