A new smoothing Newton-type method for second-order cone programming problems
Introduction
The second-order cone (SOC) in $\mathbb{R}^n$, also called the Lorentz cone or the ice-cream cone, is defined as
$$\mathcal{K}^n = \{x = (x_1; x_2) \in \mathbb{R} \times \mathbb{R}^{n-1} : x_1 \geq \|x_2\|\};$$
here and below, $\|\cdot\|$ refers to the standard Euclidean norm, $n$ is the dimension of $\mathcal{K}^n$, and for convenience, we write $\mathcal{K}$ instead of $\mathcal{K}^n$. It is easy to verify that the SOC is self-dual, that is,
$$\mathcal{K} = \mathcal{K}^* = \{s \in \mathbb{R}^n : \langle s, x \rangle \geq 0 \ \text{for all}\ x \in \mathcal{K}\}.$$
We may often drop the subscripts if the dimension is evident from the context.
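As an illustrative sketch (not part of the paper; the helper name `in_soc` is ours), the membership test $x_1 \geq \|x_2\|$ and the self-duality property $\langle x, s \rangle \geq 0$ for $x, s \in \mathcal{K}$ can be checked directly:

```python
import math

def in_soc(x, tol=1e-12):
    """Membership test for the second-order cone K^n:
    x = (x1; x2) lies in K^n iff x1 >= ||x2||."""
    x1, x2 = x[0], x[1:]
    return x1 >= math.sqrt(sum(v * v for v in x2)) - tol

# Points inside and outside K^3
assert in_soc([2.0, 1.0, 1.0])       # 2 >= sqrt(2)
assert not in_soc([1.0, 1.0, 1.0])   # 1 <  sqrt(2)

# Self-duality spot check: for x, s in K, <x, s> >= 0
x, s = [2.0, 1.0, 1.0], [3.0, 0.0, -2.0]
assert in_soc(s) and sum(a * b for a, b in zip(x, s)) >= 0
```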
For any $x = (x_1; x_2),\ y = (y_1; y_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$, their Jordan product is defined as [1]
$$x \circ y = \left(x^T y;\ x_1 y_2 + y_1 x_2\right).$$
The Jordan product "$\circ$", unlike scalar or matrix multiplication, is not associative, which is the main source of complication in the analysis of second-order cone programming (SOCP).
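As a short hedged sketch (the function name `jordan` is ours), the product takes a few lines, and a small numerical example exhibits the failure of associativity mentioned above:

```python
def jordan(x, y):
    """Jordan product x o y = (x^T y; x1*y2 + y1*x2) on R x R^{n-1}."""
    dot = sum(p * q for p, q in zip(x, y))
    tail = [x[0] * q + y[0] * p for p, q in zip(x[1:], y[1:])]
    return [dot] + tail

x = [1.0, 2.0, 0.0]
y = [3.0, 0.0, 1.0]
z = [0.0, 1.0, 1.0]
# Commutative, but in general not associative:
assert jordan(x, y) == jordan(y, x)
assert jordan(jordan(x, y), z) != jordan(x, jordan(y, z))
```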
Second-order cone programming problems minimize a linear function over the intersection of an affine space with the Cartesian product of a finite number of SOCs. The study of SOCP is of great importance because it covers linear programming, convex quadratic programming, and quadratically constrained convex quadratic optimization, as well as other problems from a wide range of applications in many fields, such as engineering, optimal control and design, machine learning, robust optimization and combinatorial optimization [2], [3], [4], [5], [6], [7], [8], [9], [10].
Without loss of generality, we consider the SOCP problem with a single SOC
$$(\text{PSOCP})\qquad \min\ c^T x \quad \text{s.t.}\ Ax = b,\ x \in \mathcal{K},$$
and its dual problem
$$(\text{DSOCP})\qquad \max\ b^T y \quad \text{s.t.}\ A^T y + s = c,\ s \in \mathcal{K},$$
where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$ and $c \in \mathbb{R}^n$, with the inner product $\langle x, s \rangle = x^T s$, are given data. $x \in \mathbb{R}^n$ is the variable, and the set $\mathcal{K}$ is an SOC of dimension $n$. Note that our analysis can be easily extended to the general case with a Cartesian product of SOCs.
We call $x$ primal feasible if $Ax = b$ and $x \in \mathcal{K}$. Similarly, $(y, s)$ is called dual feasible if $A^T y + s = c$ and $s \in \mathcal{K}$. For a given primal–dual feasible point $(x, y, s)$, $x^T s$ is called the duality gap, owing to the well-known weak duality theorem, i.e., $c^T x - b^T y \geq 0$, which follows from
$$c^T x - b^T y = (A^T y + s)^T x - (Ax)^T y = s^T x \geq 0.$$
Thus we note that $x^T s = 0$ is sufficient for optimality of the primal and dual feasible point $(x, y, s)$.
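A small hedged sketch of this bookkeeping (the data below are arbitrary integers chosen by us, not from the paper): for any $x$ with $Ax = b$ and any $(y, s)$ with $A^T y + s = c$, the identity $c^T x - b^T y = x^T s$ holds exactly; nonnegativity then comes from cone membership.

```python
# Weak-duality identity: c^T x - b^T y = (A^T y + s)^T x - (Ax)^T y = s^T x.
A = [[1, 0, 2],
     [0, 1, 1]]
x = [3, 1, 2]          # primal point; define b := Ax so x satisfies Ax = b
y = [1, 2]             # arbitrary dual point
c = [5, 4, 6]
b = [sum(A[i][j] * x[j] for j in range(3)) for i in range(2)]
ATy = [sum(A[i][j] * y[i] for i in range(2)) for j in range(3)]
s = [c[j] - ATy[j] for j in range(3)]   # slack so that A^T y + s = c

dot = lambda u, v: sum(p * q for p, q in zip(u, v))
assert dot(c, x) - dot(b, y) == dot(x, s)   # duality gap equals x^T s
```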
Throughout this paper, we make the following assumption. Assumption 1.1 Both (PSOCP) and its dual (DSOCP) are strictly feasible.
It is well known that under Assumption 1.1, the SOCP is equivalent to its optimality conditions:
$$Ax = b,\qquad A^T y + s = c,\qquad x \circ s = 0,\qquad x, s \in \mathcal{K}, \tag{3}$$
where $x \circ s = 0$ is usually referred to as the complementarity condition.
There is an extensive literature focusing on interior-point methods (IPMs) for (PSOCP) and (DSOCP) (see, e.g., [11], [12], [13], [5], [14], [10], [15], [16] and references therein). IPMs typically deal with the following perturbation of the optimality conditions (3):
$$Ax = b,\qquad A^T y + s = c,\qquad x \circ s = \mu e,\qquad x, s \in \operatorname{int}\mathcal{K}, \tag{4}$$
where $\mu > 0$ and $e = (1; 0; \ldots; 0)$ is the identity element. This set of conditions is called the central path conditions, since they define a trajectory approaching the solution set as $\mu \downarrow 0$. Conventional IPMs usually apply a Newton-type method to the equations in (4) with a suitable line search dealing with the constraints $x \in \operatorname{int}\mathcal{K}$ and $s \in \operatorname{int}\mathcal{K}$ explicitly.
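As an illustrative sketch (our own construction, not the paper's): an exactly centered pair can be built by hand, since for $x = (x_1; x_2) \in \operatorname{int}\mathcal{K}$ the point $s = (x_1; -x_2)$ satisfies $x \circ s = (x_1^2 - \|x_2\|^2)\, e$, i.e., the perturbed complementarity condition with $\mu = x_1^2 - \|x_2\|^2 > 0$. Integer data keep the check exact:

```python
# For x = (x1; x2) in int(K), s := (x1; -x2) gives
# x o s = (x1^2 - ||x2||^2) e, an exactly centered pair with mu = det(x).
def jordan(x, y):
    dot = sum(p * q for p, q in zip(x, y))
    return [dot] + [x[0] * q + y[0] * p for p, q in zip(x[1:], y[1:])]

x = [2, 1, 0]                               # interior point: 2 > ||(1, 0)||
s = [2, -1, 0]
mu = x[0] ** 2 - (x[1] ** 2 + x[2] ** 2)    # mu = 3 > 0
assert jordan(x, s) == [mu, 0, 0]           # x o s = mu * e
```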
Recently, smoothing Newton methods [17], [18], [19], [20], [21], [22], [23], [24], [25] have attracted a lot of attention, partially due to their superior numerical performance. However, some algorithms [17], [20] depend on assumptions of uniform nonsingularity and strict complementarity. Without the uniform nonsingularity assumption, the algorithm given in [26] usually needs to solve two linear systems of equations and to perform at least two line searches at each iteration. Later, Qi et al. [21] proposed a class of new smoothing Newton methods for nonlinear complementarity problems and box-constrained variational inequalities under a nonsingularity assumption. The method in [21] was shown to be locally superlinearly/quadratically convergent without strict complementarity. Moreover, the available smoothing methods are mostly for solving complementarity problems [17], [27], [20], [21], [22], [24], [28], [29]; there is little work on smoothing methods for the SOCP.
Under certain assumptions, IPMs and smoothing methods are globally convergent. However, with the exception of the infeasible interior-point methods [17], [30], [21], they need a feasible starting point.
Fukushima et al. studied Lipschitzian and differential properties of several typical smoothing functions for second-order cone complementarity problems. They derived computable formulas for their Jacobians, which provide a theoretical foundation for constructing corresponding non-interior-point methods. The purpose of this paper is precisely to present such a non-interior-point method for problem (PSOCP), which employs a new smoothing function to characterize the central path conditions. We focus on demonstrating the global convergence and local Q-quadratic convergence of the proposed algorithm.
The new smoothing algorithm to be discussed here is based on the perturbed optimality conditions (4); the main difference from IPMs is that we reformulate (4) as a smoothing linear system of equations. It is shown that our algorithm has the following desirable properties:
- (i) The algorithm can start from an arbitrary initial point.
- (ii) The algorithm needs to solve only one linear system of equations and to perform only one line search at each iteration.
- (iii) The algorithm is globally and locally Q-quadratically convergent under a mild assumption, without strict complementarity. This result is stronger than the corresponding results for IPMs.
The following notation and terminology are used throughout the paper. We use "," for adjoining vectors and matrices in a row and ";" for adjoining them in a column. $\mathbb{R}^n$ denotes the space of $n$-dimensional real column vectors, and $\mathbb{R}^{n_1} \times \cdots \times \mathbb{R}^{n_m}$ is identified with $\mathbb{R}^{n_1 + \cdots + n_m}$. For any $x \in \mathbb{R}^n$, we write $x \succeq_{\mathcal{K}} 0$ or $x \succeq 0$ (respectively, $x \succ_{\mathcal{K}} 0$ or $x \succ 0$) if $x \in \mathcal{K}$ (respectively, $x \in \operatorname{int}\mathcal{K}$, where $\operatorname{int}\mathcal{K}$ denotes the interior of $\mathcal{K}$). $\mathbb{R}_+$ ($\mathbb{R}_{++}$) denotes the set of nonnegative (positive) reals. For any $u, v \in \mathbb{R}^n$, the Euclidean inner product and norm are denoted by $\langle u, v \rangle = u^T v$ and $\|u\|$, respectively.
The paper is organized as follows: In Section 2, we give the equivalent formulation of the perturbed optimality conditions and some preliminaries. A smoothing function associated with the SOC and its properties are given in Section 3. In Section 4, we describe our new algorithm. The convergence of the new algorithm is analyzed in Section 5. Numerical results are shown in Section 6.
Preliminaries
For any vector $x = (x_1; x_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$, we define its spectral decomposition associated with the SOC as
$$x = \lambda_1 u^{(1)} + \lambda_2 u^{(2)}, \tag{5}$$
where the spectral values $\lambda_i$ and the associated spectral vectors $u^{(i)}$ of $x$ are given by
$$\lambda_i = x_1 + (-1)^i \|x_2\|,\qquad u^{(i)} = \frac{1}{2}\left(1;\ (-1)^i \bar{x}_2\right)$$
for $i = 1, 2$, with $\bar{x}_2 = x_2/\|x_2\|$ if $x_2 \neq 0$, and otherwise any $\bar{x}_2 \in \mathbb{R}^{n-1}$ such that $\|\bar{x}_2\| = 1$. If $x_2 \neq 0$, then the decomposition (5) is unique. Some interesting properties of $\lambda_i$ and $u^{(i)}$ are summarized below. Property 2.1 For any $x$, the spectral values and the associated spectral[22]
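A minimal sketch of this decomposition (the helper `spectral` is ours), using the standard formulas $\lambda_i = x_1 + (-1)^i \|x_2\|$ and $u^{(i)} = \frac12(1; (-1)^i x_2/\|x_2\|)$; the assertions verify the spectral values and that the decomposition reconstructs $x$:

```python
import math

def spectral(x):
    """Spectral decomposition of x = (x1; x2) w.r.t. K^n:
    x = lam1*u1 + lam2*u2 with lam_i = x1 + (-1)^i ||x2||,
    u_i = (1; (-1)^i x2/||x2||)/2 (any unit vector if x2 = 0)."""
    x1, x2 = x[0], x[1:]
    nrm = math.sqrt(sum(v * v for v in x2))
    w = [v / nrm for v in x2] if nrm > 0 else [1.0] + [0.0] * (len(x2) - 1)
    lam1, lam2 = x1 - nrm, x1 + nrm
    u1 = [0.5] + [-0.5 * v for v in w]
    u2 = [0.5] + [0.5 * v for v in w]
    return lam1, lam2, u1, u2

lam1, lam2, u1, u2 = spectral([3.0, 1.0, 0.0])
assert (lam1, lam2) == (2.0, 4.0)
# The decomposition reconstructs x exactly:
recon = [lam1 * a + lam2 * b for a, b in zip(u1, u2)]
assert recon == [3.0, 1.0, 0.0]
```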
A smoothing function associated with the SOC and its properties
First, let us introduce a smoothing function. In [22], it has been shown that the vector-valued Fischer–Burmeister function defined by
$$\phi_{FB}(x, y) = x + y - \left(x^2 + y^2\right)^{1/2},$$
where $x^2 = x \circ x$ and $w^{1/2}$ denotes the square root of $w \in \mathcal{K}$, satisfies the following important property:
$$\phi_{FB}(x, y) = 0 \iff x \in \mathcal{K},\ y \in \mathcal{K},\ x \circ y = 0.$$
The Fischer–Burmeister function has many interesting properties. In [29], Pan and Chen showed that $\phi_{FB}$ is semismooth, which leads to a semismooth Newton method for SOCCPs. In [28], Chen and Tseng proved that the associated merit function is smooth everywhere, which yields a merit
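As a hedged sketch of this characterization (the helpers `soc_sqrt` and `fb` and the boundary pair are our illustration, not the paper's), the SOC square root can be computed through the spectral decomposition, $w^{1/2} = \sqrt{\lambda_1}\, u^{(1)} + \sqrt{\lambda_2}\, u^{(2)}$, and a complementary pair $x, y \in \mathcal{K}$ with $x \circ y = 0$ indeed gives $\phi_{FB}(x, y) = 0$:

```python
import math

def jordan(x, y):
    dot = sum(p * q for p, q in zip(x, y))
    return [dot] + [x[0] * q + y[0] * p for p, q in zip(x[1:], y[1:])]

def soc_sqrt(w):
    """Square root in the Jordan algebra: for w in K,
    sqrt(w) = sqrt(lam1)*u1 + sqrt(lam2)*u2 via the spectral decomposition."""
    w1, w2 = w[0], w[1:]
    nrm = math.sqrt(sum(v * v for v in w2))
    d = [v / nrm for v in w2] if nrm > 0 else [0.0] * len(w2)
    r1, r2 = math.sqrt(w1 - nrm), math.sqrt(w1 + nrm)
    return [(r1 + r2) / 2] + [(r2 - r1) / 2 * v for v in d]

def fb(x, y):
    """Vector-valued Fischer-Burmeister function x + y - sqrt(x^2 + y^2)."""
    w = [a + b for a, b in zip(jordan(x, x), jordan(y, y))]
    r = soc_sqrt(w)
    return [a + b - c for a, b, c in zip(x, y, r)]

# A complementary boundary pair: x, y in K with x o y = 0  =>  phi(x, y) = 0
x, y = [1.0, 1.0, 0.0], [1.0, -1.0, 0.0]
assert jordan(x, y) == [0.0, 0.0, 0.0]
assert fb(x, y) == [0.0, 0.0, 0.0]
```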
Description of the algorithm
Based on the smoothing function (10) introduced in the previous section, the aim of this section is to propose a smoothing Newton-type algorithm for the SOCP and to show that it is well defined under suitable assumptions.
Let . By using the smoothing function (10), we define the function by
In view of (9) and (18), is a solution of the system if and only if it satisfies the optimality conditions (3).
It is
Convergence analysis
In this section, we analyze the global and local convergence properties of Algorithm 4.1. It is shown that any accumulation point of the iterative sequence is a solution of the system. If the accumulation point satisfies a nonsingularity assumption, then the iterative sequence converges to it locally Q-quadratically, without strict complementarity. To show the global convergence of Algorithm 4.1, we need the following lemma (see [21, Lemma 5]). Lemma 5.1 Suppose that Assumption 4.1 holds. For any
Numerical results
In this section, we conduct numerical experiments solving several SOCPs with Algorithm 4.1 and report the computational results. All experiments were performed on a PC with a 2.0 GHz CPU and 512 MB of memory. The computer codes were all written in Matlab 7.0.1.
Throughout the experiments, the following parameters were used in the algorithm:We used as the stopping criterion. The starting point was chosen to be .
Firstly, we test the following problems
Acknowledgements
This work was supported by the National Natural Science Foundation of China (10571109), the Natural Science Foundation of Shandong Province (Y2008A01), and the Key Supported Discipline of Shanxi Province (070104).
The authors would like to thank the anonymous referees for their valuable comments and suggestions, which have considerably improved the paper. In particular, we sincerely thank Professors Shaohua Pan and Jein-Shan Chen for their help in improving the numerical experiments.
References
- et al., Applications of second-order cone programming, Linear Algebra and its Applications (1998)
- et al., Robust supergain beamforming for circular array via second-order cone programming, Applied Acoustics (2005)
- et al., Robust BMPM training based on second-order cone programming and its application in medical diagnosis, Neural Networks (2008)
- et al., The convergence of a one-step smoothing Newton method for $P_0$-NCP based on a new smoothing NCP-function, Journal of Computational and Applied Mathematics (2008)
- et al., Analysis on Symmetric Cones (1994)
- et al., Second order cone programming approaches for handling missing and uncertain data, Journal of Machine Learning Research (2006)
- et al., An efficient support vector machine learning method with second-order cone programming for large-scale problems, Applied Intelligence (2005)
- et al., Extension of primal–dual interior point algorithms to symmetric cones, Mathematical Programming, Series A (2003)
- et al., Optimal magnetic shield design with second-order cone programming, SIAM Journal on Scientific Computing (2003)
- et al., Contact analysis of cable networks by using second-order cone programming, SIAM Journal on Scientific Computing (2006)
- Interior point methods for second-order cone programming and OR applications, Computational Optimization and Applications
- Second-order cone programming, Mathematical Programming
- Primal–dual interior-point methods for second-order conic optimization based on self-regular proximities, SIAM Journal on Optimization
- Euclidean Jordan algebras and interior-point algorithms, Positivity
- Primal–dual interior-point methods for self-scaled cones, SIAM Journal on Optimization
- Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones, Optimization Methods and Software