A new smoothing Newton-type method for second-order cone programming problems

https://doi.org/10.1016/j.amc.2009.06.029

Abstract

A new smoothing function of the well-known Fischer–Burmeister function is given. Based on this new function, a smoothing Newton-type method is proposed for solving second-order cone programming. At each iteration, the proposed algorithm solves only one system of linear equations and performs only one line search. This algorithm can start from an arbitrary point and it is Q-quadratically convergent under a mild assumption. Numerical results demonstrate the effectiveness of the algorithm.

Introduction

The second-order cone (SOC) in R^n (n ≥ 2), also called the Lorentz cone or the ice-cream cone, is defined as

Q^n = {(x1; x2) : x1 ∈ R, x2 ∈ R^{n−1} and x1 ≥ ‖x2‖},

where, here and below, ‖·‖ refers to the standard Euclidean norm, n is the dimension of Q^n, and for convenience we write x = (x1; x2) instead of (x1, x2^T)^T. It is easy to verify that the SOC Q^n is self-dual, that is,

Q^n = (Q^n)* = {s ∈ R^n : s^T x ≥ 0 for all x ∈ Q^n}.

We may often drop the superscript n if the dimension is evident from the context.
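As a small illustration (not part of the paper), membership in Q^n can be checked directly from the defining inequality x1 ≥ ‖x2‖; the helper `in_soc` below is hypothetical:

```python
import numpy as np

def in_soc(x, tol=0.0):
    """Check whether x = (x1; x2) lies in the second-order cone Q^n,
    i.e. whether x1 >= ||x2|| (up to an optional tolerance)."""
    x = np.asarray(x, dtype=float)
    return x[0] >= np.linalg.norm(x[1:]) - tol

# (2; 1, 1): 2 >= sqrt(2), so the point is in Q^3
assert in_soc([2.0, 1.0, 1.0])
# (1; 1, 1): 1 < sqrt(2), so it is not
assert not in_soc([1.0, 1.0, 1.0])
```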

For any x = (x1; x2), y = (y1; y2) ∈ R × R^{n−1}, their Jordan product is defined as [1]

x ∘ y = (x^T y; x1 y2 + y1 x2).

The Jordan product "∘", unlike scalar or matrix multiplication, is not associative, which is the main source of complication in the analysis of second-order cone programming (SOCP).
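The non-associativity is easy to verify numerically. The following sketch (the function name `jordan` and the sample points are our own) implements the product above and exhibits a triple for which (x ∘ y) ∘ z ≠ x ∘ (y ∘ z):

```python
import numpy as np

def jordan(x, y):
    """Jordan product x ∘ y = (x^T y; x1*y2 + y1*x2) on R × R^{n-1}."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

x = np.array([1.0, 2.0, 0.0])
y = np.array([0.5, 1.0, 3.0])
z = np.array([2.0, 0.0, 1.0])

# the product is commutative ...
assert np.allclose(jordan(x, y), jordan(y, x))
# ... but not associative in general
assert not np.allclose(jordan(jordan(x, y), z), jordan(x, jordan(y, z)))
```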

Second-order cone programming problems minimize a linear function over the intersection of an affine space with the Cartesian product of a finite number of SOCs. The study of SOCP is of great importance as it covers linear programming, convex quadratic programming, and quadratically constrained convex quadratic optimization, as well as other problems from a wide range of applications in many fields, such as engineering, optimal control and design, machine learning, robust optimization and combinatorial optimization [2], [3], [4], [5], [6], [7], [8], [9], [10].

Without loss of generality, we consider the SOCP problem with a single SOC

(PSOCP)  min {⟨c, x⟩ : Ax = b, x ∈ Q}

and its dual problem

(DSOCP)  max {⟨b, y⟩ : A^T y + s = c, s ∈ Q, y ∈ R^m},

where c ∈ R^n, A ∈ R^{m×n} and b ∈ R^m are given data, ⟨·,·⟩ is an inner product, x ∈ Q is the variable, and the set Q is an SOC of dimension n. Note that our analysis can easily be extended to the general case with a Cartesian product of SOCs.

We call x ∈ Q primal feasible if Ax = b. Similarly, (y, s) ∈ R^m × Q is called dual feasible if A^T y + s = c. For a given primal–dual feasible point (x, y, s) ∈ Q × R^m × Q, ⟨x, s⟩ is called the duality gap; by the well-known weak duality theorem, ⟨x, s⟩ ≥ 0, and

⟨c, x⟩ − ⟨b, y⟩ = ⟨A^T y + s, x⟩ − ⟨Ax, y⟩ = ⟨x, s⟩ ≥ 0.

Thus ⟨x, s⟩ = 0 is sufficient for optimality of a primal–dual feasible (x, y, s) ∈ Q × R^m × Q.
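The gap identity above is purely algebraic and holds for any points satisfying the two feasibility equations. A quick numerical sanity check (with randomly generated data of our own choosing, constructed so that both equations hold):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 4
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)
y = rng.standard_normal(m)
s = rng.standard_normal(n)
b = A @ x          # makes x satisfy Ax = b by construction
c = A.T @ y + s    # makes (y, s) satisfy A^T y + s = c by construction

# <c, x> - <b, y> = <A^T y + s, x> - <Ax, y> = <x, s>
assert np.isclose(c @ x - b @ y, x @ s)
```

Cone membership is not needed for the identity itself; it enters only through the sign ⟨x, s⟩ ≥ 0.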

Throughout this paper, we make the following Assumption:

Assumption 1.1

Both (PSOCP) and its dual (DSOCP) are strictly feasible.

It is well known that under Assumption 1.1, the SOCP is equivalent to its optimality conditions:

Ax = b,  A^T y + s = c,  x ∘ s = 0,  x, s ∈ Q,  y ∈ R^m,  (3)

where x ∘ s = 0 is usually referred to as the complementarity condition.

There is an extensive literature focusing on interior-point methods (IPMs) for (PSOCP) and (DSOCP) (see, e.g., [11], [12], [13], [5], [14], [10], [15], [16] and references therein). IPMs typically deal with the following perturbation of the optimality conditions (3):

Ax = b,  A^T y + s = c,  x ∘ s = μe,  x, s ∈ Q,  y ∈ R^m,  (4)

where μ > 0 and e = (1; 0) ∈ R × R^{n−1} is the identity element. This set of conditions is called the central path conditions, since they define a trajectory approaching the solution set as μ → 0. Conventional IPMs usually apply a Newton-type method to the equations in (4) with a suitable line search, dealing with the constraints x ∈ Q and s ∈ Q explicitly.

Recently, smoothing Newton methods [17], [18], [19], [20], [21], [22], [23], [24], [25] have attracted a lot of attention, partially due to their superior numerical performance. However, some algorithms [17], [20] depend on assumptions of uniform nonsingularity and strict complementarity. Without the uniform nonsingularity assumption, the algorithm given in [26] usually needs to solve two linear systems of equations and to perform at least two line searches at each iteration. Qi et al. [21] proposed a class of new smoothing Newton methods for nonlinear complementarity problems and box-constrained variational inequalities under a nonsingularity assumption. The method in [21] was shown to be locally superlinearly/quadratically convergent without strict complementarity. Moreover, the smoothing methods available are mostly for solving complementarity problems [17], [27], [20], [21], [22], [24], [28], [29]; there is little work on smoothing methods for the SOCP.

Under certain assumptions, IPMs and smoothing methods are globally convergent. However, with the exception of the infeasible interior-point methods [17], [30], [21], they need a feasible starting point.

Fukushima et al. studied Lipschitzian and differential properties of several typical smoothing functions for second-order cone complementarity problems. They derived computable formulas for the Jacobians of these functions, which provide a theoretical foundation for constructing corresponding non-interior-point methods. The purpose of this paper is to present such a non-interior-point method for problem (PSOCP), which employs a new smoothing function to characterize the central path conditions. We focus on demonstrating the global convergence and local Q-quadratic convergence of the proposed algorithm.

The new smoothing algorithm to be discussed here is based on the perturbed optimality conditions (4), and the main difference from IPMs is that we reformulate (4) as a smooth system of equations. It is shown that our algorithm has the following good properties:

  • (i)

    The algorithm can start from an arbitrary initial point.

  • (ii)

    The algorithm needs to solve only one linear system of equations and perform only one line search at each iteration.

  • (iii)

    The algorithm is globally convergent and locally Q-quadratically convergent under a mild assumption, without strict complementarity. This result is stronger than the corresponding results for IPMs.

The following notation and terminology are used throughout the paper. We use "," for adjoining vectors and matrices in a row and ";" for adjoining them in a column. R^n (n ≥ 1) denotes the space of n-dimensional real column vectors, and R^n × R^m is identified with R^{n+m}. Denote x² = x ∘ x. For any x, y ∈ R^n, we write x ⪰_Q y (respectively, x ≻_Q y) if x − y ∈ Q (respectively, x − y ∈ int Q, where int Q denotes the interior of Q), and x ⪯_Q y (respectively, x ≺_Q y) if y − x ∈ Q (respectively, y − x ∈ int Q). R_+ (R_++) denotes the set of nonnegative (positive) reals. For any x, y ∈ R^n, the Euclidean inner product and norm are denoted by ⟨x, y⟩ = x^T y and ‖x‖ = √(x^T x), respectively.

The paper is organized as follows: In Section 2, we give the equivalent formulation of the perturbed optimality conditions and some preliminaries. A smoothing function associated with the SOC Q and its properties are given in Section 3. In Section 4, we describe our new algorithm. The convergence of the new algorithm is analyzed in Section 5. Numerical results are shown in Section 6.

Section snippets

Preliminaries

For any vector x = (x1; x2) ∈ R × R^{n−1}, we define its spectral decomposition associated with the SOC Q as

x = λ1 u1 + λ2 u2,  (5)

where the spectral values λi and the associated spectral vectors ui of x are given by

λi = x1 + (−1)^{i+1} ‖x2‖,

ui = (1/2)(1; (−1)^{i+1} x2/‖x2‖) if x2 ≠ 0, and ui = (1/2)(1; (−1)^{i+1} ω) if x2 = 0,

for i = 1, 2, with any ω ∈ R^{n−1} such that ‖ω‖ = 1. If x2 ≠ 0, then the decomposition (5) is unique. Some interesting properties of λ1, λ2 and u1, u2 are summarized below.
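The decomposition above can be sketched in a few lines; `spectral` is a hypothetical helper that returns the spectral values and vectors and lets us verify the reconstruction x = λ1 u1 + λ2 u2:

```python
import numpy as np

def spectral(x):
    """Spectral decomposition of x = (x1; x2) w.r.t. the SOC:
    λ_i = x1 + (-1)^{i+1} ||x2||, u_i = (1/2)(1; (-1)^{i+1} x2/||x2||)."""
    x = np.asarray(x, float)
    x1, x2 = x[0], x[1:]
    nx2 = np.linalg.norm(x2)
    # if x2 = 0, any unit vector ω works; take the first basis vector
    w = x2 / nx2 if nx2 > 0 else np.eye(len(x2))[0]
    lam = [x1 + nx2, x1 - nx2]
    u = [0.5 * np.concatenate(([1.0], w)),
         0.5 * np.concatenate(([1.0], -w))]
    return lam, u

x = np.array([3.0, 1.0, 2.0])
lam, u = spectral(x)
# the decomposition reconstructs x exactly
assert np.allclose(lam[0] * u[0] + lam[1] * u[1], x)
# x ∈ Q exactly when both spectral values are nonnegative
assert min(lam) >= 0
```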

Property 2.1

[22]

For any x = (x1; x2) ∈ R × R^{n−1}, the spectral values λ1, λ2 and the associated spectral

A smoothing function associated with the SOC Q and its properties

First, let us introduce a smoothing function. In [22], it has been shown that the vector-valued Fischer–Burmeister function ϕ_FB(x, s) : R^n × R^n → R^n defined by

ϕ_FB(x, s) = x + s − (x² + s²)^{1/2}

satisfies the following important property:

ϕ_FB(x, s) = 0 ⟺ x ⪰_Q 0, s ⪰_Q 0, x ∘ s = 0.

The Fischer–Burmeister function has many interesting properties. In [29], Pan and Chen showed that ϕ_FB is semismooth, which leads to a semismooth Newton method for SOCCPs. In [28], Chen and Tseng proved that ‖ϕ_FB‖² is smooth everywhere, which yields a merit
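The square root (x² + s²)^{1/2} is taken in the Jordan algebra and can be evaluated through the spectral decomposition of Section 2. The sketch below (helper names `jordan`, `soc_sqrt`, `phi_fb` are our own) checks the characterization on a complementary boundary pair:

```python
import numpy as np

def jordan(x, y):
    """Jordan product x ∘ y = (x^T y; x1*y2 + y1*x2)."""
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

def soc_sqrt(z):
    """Square root in the Jordan algebra: sqrt(λ1) u1 + sqrt(λ2) u2,
    using the spectral decomposition of z (assumes z ∈ Q)."""
    z1, z2 = z[0], z[1:]
    n2 = np.linalg.norm(z2)
    w = z2 / n2 if n2 > 0 else np.zeros_like(z2)
    u1 = 0.5 * np.concatenate(([1.0], w))
    u2 = 0.5 * np.concatenate(([1.0], -w))
    return np.sqrt(z1 + n2) * u1 + np.sqrt(z1 - n2) * u2

def phi_fb(x, s):
    """Fischer–Burmeister function ϕ_FB(x, s) = x + s - (x² + s²)^{1/2}."""
    return x + s - soc_sqrt(jordan(x, x) + jordan(s, s))

# a boundary complementary pair: x, s ∈ Q and x ∘ s = 0
x = np.array([1.0, 1.0, 0.0])
s = np.array([1.0, -1.0, 0.0])
assert np.allclose(jordan(x, s), 0)
assert np.allclose(phi_fb(x, s), 0)
```

Note that the pair lies on the boundary of Q, so the example also illustrates that strict complementarity is not required for ϕ_FB to vanish.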

Description of the algorithm

Based on the smoothing function (10) introduced in the previous section, the aim of this section is to propose a smoothing Newton-type algorithm for the SOCP and to show that it is well defined under suitable assumptions.

Let z ≔ (μ, x, y). By using the smoothing function (10), we define the function G(μ, x, y) : R_++ × R^n × R^m → R_++ × R^m × R^n by

G(z) ≔ (μ; Ax − b; Φ(μ, x, c − A^T y)).

In view of (9) and (18), z = (μ, x, y) is a solution of the system G(z) = 0 if and only if (x, y, c − A^T y) satisfies the optimality conditions (3).
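The paper's smoothing function Φ from (10) is not reproduced in this excerpt; as a rough sketch only, the residual map G can be assembled with the non-smoothed Fischer–Burmeister function ϕ_FB standing in for Φ (which is what Φ reduces to at μ = 0), verified on a point built to satisfy the optimality conditions (3):

```python
import numpy as np

def jordan(x, y):
    """Jordan product x ∘ y."""
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

def soc_sqrt(z):
    """Square root in the Jordan algebra via the spectral decomposition."""
    z1, z2 = z[0], z[1:]
    n2 = np.linalg.norm(z2)
    w = z2 / n2 if n2 > 0 else np.zeros_like(z2)
    u1 = 0.5 * np.concatenate(([1.0], w))
    u2 = 0.5 * np.concatenate(([1.0], -w))
    return np.sqrt(z1 + n2) * u1 + np.sqrt(z1 - n2) * u2

def phi_fb(x, s):
    """Stand-in for Φ at μ = 0: the Fischer–Burmeister function."""
    return x + s - soc_sqrt(jordan(x, x) + jordan(s, s))

def G(mu, x, y, A, b, c):
    """Residual G(z) = (μ; Ax − b; Φ(μ, x, c − A^T y)), with Φ replaced
    by phi_fb since (10) is not available in this excerpt."""
    s = c - A.T @ y
    return np.concatenate(([mu], A @ x - b, phi_fb(x, s)))

# a point satisfying the optimality conditions (3) by construction
x = np.array([1.0, 1.0, 0.0])                      # x ∈ Q (boundary)
s = np.array([1.0, -1.0, 0.0])                     # s ∈ Q, x ∘ s = 0
A = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
y = np.array([0.5, -0.5])
b = A @ x                                          # primal feasibility
c = A.T @ y + s                                    # dual feasibility
residual = G(0.0, x, y, A, b, c)
assert np.allclose(residual, 0)
```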

It is

Convergence analysis

In this section, we analyze the global and local convergence properties of Algorithm 4.1. It is shown that any accumulation point of the iterative sequence is a solution of the system G(z) = 0. If an accumulation point z* satisfies a nonsingularity assumption, then the iterative sequence converges to z* locally Q-quadratically, without strict complementarity. To show the global convergence of Algorithm 4.1, we need the following lemma (see [21, Lemma 5]).

Lemma 5.1

Suppose that Assumption 4.1 holds. For any

Numerical results

In this section, we conduct numerical experiments, solving some SOCPs with Algorithm 4.1 and reporting the computational results. All experiments were done on a PC with a 2.0 GHz CPU and 512 MB of memory. The computer codes were all written in Matlab 7.0.1.

Throughout the experiments, the following parameters were used in the algorithm:

μ0 = 0.01,  σ = 0.35,  δ = 0.65,  γ = 0.90.

We used ‖G(z)‖ ≤ 10^{−8} as the stopping criterion. The starting point was chosen to be x0 = e ∈ R^n, y0 = 0 ∈ R^m.

Firstly, we test the following problems

Acknowledgements

The work is supported by the National Natural Science Foundation of China (10571109), the Natural Science Foundation of Shandong (Y2008A01), and the Key Supported Discipline of Shanxi Province (070104).

The authors would like to thank the anonymous referees for their valuable comments and suggestions on the paper, which have considerably improved the paper. Especially, we sincerely thank Professors Shaohua Pan and Jein-Shan Chen for their help to improve the numerical experiments.

References (32)

  • Y.J. Kuo et al., Interior point methods for second-order cone programming and OR applications, Computational Optimization and Applications (2004).

  • F. Alizadeh et al., Second-order cone programming, Mathematical Programming (2003).

  • J. Peng et al., Primal–dual interior-point methods for second-order conic optimization based on self-regular proximities, SIAM Journal on Optimization (2002).

  • L. Faybusovich, Euclidean Jordan algebras and interior-point algorithms, Positivity (1997).

  • Y.E. Nesterov et al., Primal–dual interior-point methods for self-scaled cones, SIAM Journal on Optimization (1998).

  • J.F. Sturm, Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones, Optimization Methods and Software (1999).