Swarm and Evolutionary Computation

Volume 44, February 2019, Pages 665-679

Push and pull search for solving constrained multi-objective optimization problems

https://doi.org/10.1016/j.swevo.2018.08.017

Abstract

This paper proposes a push and pull search (PPS) framework for solving constrained multi-objective optimization problems (CMOPs). To be more specific, the proposed PPS divides the search process into two different stages: push and pull search stages. In the push stage, a multi-objective evolutionary algorithm (MOEA) is used to explore the search space without considering any constraints, which can help to get across infeasible regions very quickly and to approach the unconstrained Pareto front. Furthermore, the landscape of CMOPs with constraints can be probed and estimated in the push stage, which can be utilized to guide the parameter setting for the constraint-handling approaches to be applied in the pull stage. Then, a modified form of a constrained multi-objective evolutionary algorithm (CMOEA), with improved epsilon constraint-handling, is applied to pull the infeasible individuals obtained in the push stage toward the feasible and non-dominated regions. To evaluate the performance regarding convergence and diversity, a set of benchmark CMOPs and a real-world optimization problem are used to test the proposed PPS (PPS-MOEA/D) and state-of-the-art CMOEAs, including MOEA/D-IEpsilon, MOEA/D-Epsilon, MOEA/D-CDP, MOEA/D-SR, C-MOEA/D and NSGA-II-CDP. The comprehensive experimental results show that the proposed PPS-MOEA/D achieves significantly better performance than the other six CMOEAs on most of the tested problems, which indicates the superiority of the proposed PPS method for solving CMOPs.

Introduction

Many real-world optimization problems can be summarized as optimizing a number of conflicting objectives simultaneously with a set of equality and/or inequality constraints. Such problems are called constrained multi-objective optimization problems (CMOPs). Without loss of generality, a CMOP considered in this paper can be defined as follows [1]:

minimize F(x) = (f1(x), …, fm(x))^T
subject to gi(x) ≥ 0, i = 1, …, q
           hj(x) = 0, j = 1, …, p
           x ∈ R^n

where F(x) = (f1(x), f2(x), …, fm(x))^T is an m-dimensional objective vector, and F(x) ∈ R^m. gi(x) ≥ 0 is an inequality constraint, and q is the number of inequality constraints. hj(x) = 0 is an equality constraint, and p is the number of equality constraints. x ∈ R^n is an n-dimensional decision vector.

When solving CMOPs with inequality and/or equality constraints, we usually convert the equality constraints into inequality constraints by introducing an extremely small positive number δ. The detailed transformation is given as follows:

hj(x) = 0  ⟹  δ − |hj(x)| ≥ 0

To deal with a set of constraints in CMOPs, the overall constraint violation is a widely used approach, which summarizes the violations into a single scalar as follows:

ϕ(x) = ∑_{i=1}^{q} |min(gi(x), 0)| + ∑_{j=1}^{p} |min(δ − |hj(x)|, 0)|
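As a minimal sketch of the overall constraint violation defined above (the function names here are illustrative, not from the paper), ϕ(x) can be computed by summing the inequality violations and the δ-relaxed equality violations:

```python
def overall_violation(x, ineq_constraints, eq_constraints, delta=1e-4):
    """Overall constraint violation phi(x); 0.0 means x is feasible.

    ineq_constraints: callables g with feasibility condition g(x) >= 0.
    eq_constraints: callables h with h(x) = 0, relaxed to delta - |h(x)| >= 0.
    """
    phi = sum(abs(min(g(x), 0.0)) for g in ineq_constraints)
    phi += sum(abs(min(delta - abs(h(x)), 0.0)) for h in eq_constraints)
    return phi
```

For instance, with the inequality x[0] − 1 ≥ 0 and the equality x[1] = 0, the point (2, 0) is feasible (ϕ = 0), while (0, 0.5) violates both constraints.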

Given a solution xk ∈ R^n, if ϕ(xk) = 0, xk is feasible. All the feasible solutions constitute a feasible solution set S, which is defined as S = {x∣ϕ(x) = 0, x ∈ R^n}. For any two solutions xa, xb ∈ S, xa is said to dominate xb if fi(xa) ≤ fi(xb) for each i ∈ {1, …, m} and fj(xa) < fj(xb) for at least one j ∈ {1, …, m}, denoted as xa ⪯ xb. If there is no other solution in S dominating solution x, then x is called a Pareto optimal solution. All of the Pareto optimal solutions constitute a Pareto optimal set (PS). The mapping of the PS in the objective space is called a Pareto optimal front (PF), which is defined as PF = {F(x)∣x ∈ PS}.
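The Pareto dominance relation above translates directly into code (a minimal sketch for the minimization convention used in this paper):

```python
def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization):
    fa is no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))
```

Equal vectors do not dominate each other, and two vectors where each is better in a different objective are mutually non-dominated.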

A key issue in CMOEAs is to maintain a balance between minimizing the objectives and satisfying the constraints. In fact, most constraint-handling mechanisms in evolutionary computation are designed to try to achieve this balance. For example, the penalty function approach adopts a penalty factor λ to maintain this balance. It converts a CMOP into an unconstrained MOP by adding the overall constraint violation multiplied by a predefined penalty factor λ to each objective [2]. In the case of λ = ∞, it is called a death penalty approach [3], which means that infeasible solutions are totally unacceptable. If λ is a static value during the search process, it is called a static penalty approach [4]. If λ is changing during the search process, it is called a dynamic penalty approach [5]. In the case in which λ is changing according to the information collected during the search process, it is called an adaptive penalty approach [[6], [7], [8], [9]].
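The penalty-function conversion described above can be sketched as follows (a hedged illustration, not the authors' implementation):

```python
def penalized_objectives(F, phi, lam):
    """Convert a CMOP into an unconstrained MOP: add the overall constraint
    violation phi, weighted by penalty factor lam, to every objective in F."""
    return [f + lam * phi for f in F]
```

With a static λ this realizes the static penalty approach; making λ grow over generations or adapt to search feedback yields the dynamic and adaptive variants.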

In order to avoid the need to tune the penalty factors, another type of constraint-handling method is also in use, which compares the objectives and constraints separately. Representative examples include the constraint dominance principle (CDP) [10], epsilon constraint-handling method (EC) [11], stochastic ranking approach (SR) [12], and so on. In CDP [10], three basic rules are adopted to compare any two solutions. In the first rule, given two solutions xi,xjRn, if xi is feasible and xj is infeasible, xi is better than xj. If xi and xj are both infeasible, the one with a smaller constraint violation is better. In the last rule, xi and xj are both feasible, and the one dominating the other is better. CDP is a popular constraint-handling method, as it is simple and has no extra parameters. However, it is not suitable for solving CMOPs with very small and narrow feasible regions [13]. For many generations, most or even all solutions in the working population are infeasible when solving CMOPs with this property. In addition, the diversity of the working population can hardly be well maintained, because the selection of solutions is only based on the constraint violations according to the second rule of CDP.
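The three CDP rules above can be expressed as a pairwise comparator (a minimal sketch; the dominance flag would come from a Pareto-dominance check on the objectives):

```python
def cdp_better(phi_a, phi_b, a_dominates_b):
    """True if solution a is preferred over b under the constraint
    dominance principle (CDP). phi_*: overall constraint violations."""
    if phi_a == 0.0 and phi_b > 0.0:    # rule 1: feasible beats infeasible
        return True
    if phi_a > 0.0 and phi_b > 0.0:     # rule 2: smaller violation is better
        return phi_a < phi_b
    if phi_a == 0.0 and phi_b == 0.0:   # rule 3: both feasible, use dominance
        return a_dominates_b
    return False                         # a infeasible, b feasible
```

Note that when both solutions are infeasible, the objectives are never consulted, which is exactly why diversity suffers on problems with tiny feasible regions.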

In order to solve CMOPs with small and narrow feasible regions, the epsilon constraint-handling (EC) [11] approach has been suggested. It is similar to CDP except for the relaxation of the constraints. In EC, the relaxation of the constraints is controlled by the epsilon level ɛ, which can help to maintain the diversity of the working population in the case when most solutions are infeasible. To be more specific, if the overall constraint violation of a solution is less than ɛ, this solution is deemed feasible. The epsilon level ɛ is a critical parameter in EC. In the case of ɛ = 0, EC is the same as CDP. Although EC can be used to solve CMOPs with small feasible regions, controlling the value of ɛ properly is not at all trivial.
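A sketch of the epsilon comparison (illustrative, not the paper's exact routine): violations at or below the epsilon level ɛ are treated as zero, after which the three CDP rules apply; setting ɛ = 0 recovers plain CDP.

```python
def ec_better(phi_a, phi_b, a_dominates_b, eps):
    """Epsilon constraint-handling (EC) pairwise comparison.
    Solutions with violation <= eps are deemed feasible."""
    pa = 0.0 if phi_a <= eps else phi_a
    pb = 0.0 if phi_b <= eps else phi_b
    if pa == 0.0 and pb == 0.0:   # both epsilon-feasible: compare objectives
        return a_dominates_b
    if pa == 0.0:                 # only a is epsilon-feasible
        return True
    if pb == 0.0:                 # only b is epsilon-feasible
        return False
    return pa < pb                # both infeasible: smaller violation wins
```
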

Both CDP [10] and EC [11] first compare the constraints, then compare the objectives. SR [12] is different from CDP and EC in terms of the order of comparison. It adopts a probability parameter pf ∈ [0, 1] to decide if the comparison is to be based on objectives or constraints. For any two solutions, if a random number is less than pf, the one with the non-dominated objectives is deemed better—i.e., the comparison is based on objectives. On the other hand, if the random number is greater than pf, the comparison is based first on the constraints, then on the objectives, as is the case with CDP. In the case of pf = 0, SR is equivalent to CDP.
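The stochastic ranking comparison can likewise be sketched (a hedged illustration; pf = 0 reduces it to constraint-first comparison as in CDP, pf = 1 compares by objectives only):

```python
import random

def sr_better(phi_a, phi_b, a_dominates_b, pf, rng=random):
    """Stochastic ranking (SR) pairwise comparison. With probability pf the
    comparison uses objectives only; otherwise constraints come first."""
    if rng.random() < pf or (phi_a == 0.0 and phi_b == 0.0):
        return a_dominates_b          # objective-based comparison
    if phi_a == phi_b:
        return a_dominates_b          # tied violations: fall back to objectives
    return phi_a < phi_b              # constraint-based comparison
```
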

In recent years, much work has been done in the field of many-objective evolutionary algorithms (MaOEAs) [14], which gives us new ways to solve CMOPs. In order to balance the constraints and the objectives, some researchers adopt multi-objective evolutionary algorithms (MOEAs) or MaOEAs (when the number of objectives is greater than three) to deal with constraints [15]. For an M-objective CMOP, its constraints can be converted into one or k extra objectives. Then the M-objective CMOP is transformed into an (M + 1)- or (M + k)-objective unconstrained MOP, which can be solved by MOEAs or MaOEAs. Representative examples include Cai and Wang's Method (CW) [16] and the infeasibility driven evolutionary algorithms (IDEA) [17].

To maintain a good balance between minimizing the objectives and satisfying the constraints, some researchers combine several constraint-handling mechanisms, which can be further divided into two categories, including adopting different constraint-handling mechanisms in either different evolutionary stages or in different subproblems. For example, the adaptive trade-off model (ATM) [18] uses two different constraint-handling mechanisms, including a multi-objective approach and adaptive penalty functions, in different evolutionary stages. The ensemble of constraint-handling methods (ECHM) [19] uses three different constraint-handling techniques, including epsilon constraint-handling (EC) [11], self-adaptive penalty functions (SP) [9] and superiority of feasible solutions (SF) [20]. Three subpopulations are generated in ECHM, and each subpopulation uses a different constraint-handling method.

In this paper, we propose a biphasic CMOEA, namely push and pull search (PPS), to balance objective minimization and constraint satisfaction. Unlike the above-mentioned constraint-handling methods, the PPS divides the search process into two different stages. In the first stage, only the objectives are optimized, which means the working population is pushed toward the unconstrained PF without considering any constraints. Furthermore, the landscape of constraints in CMOPs can be estimated in the push stage, which can be used to guide the parameter setting of the constraint-handling approaches to be applied in the pull stage. In the pull stage, an improved epsilon constraint-handling approach is adopted to pull the working population to the constrained PF. In summary, PPS provides a new framework and has the following potential advantages.

  • 1.

    It has the ability to get across large infeasible regions of the constrained PF. Since the constraints are ignored in the push stage, any infeasible regions encountered before the true PF present no barriers for the working population.

  • 2.

    It facilitates the parameter setting in the constraint-handling methods. Since the landscape of constraints has already been explored by the push process, much information has been discovered and gathered to guide the parameter setting for the pull stage.
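The two-stage idea can be sketched as a generic loop (all function names here are illustrative placeholders, not the authors' API; the epsilon schedule is a simplified stand-in for the improved epsilon handling described in Section 3):

```python
def pps(init_pop, evaluate, push_step, pull_step, switch_gen, max_gen):
    """Skeleton of push-and-pull search.

    evaluate(x) -> (objectives, phi): phi is the overall constraint violation.
    push_step: one generation of unconstrained MOEA search.
    pull_step: one generation of constrained search under epsilon level eps.
    """
    pop = init_pop
    eps = None
    for gen in range(max_gen):
        if gen < switch_gen:
            pop = push_step(pop, evaluate)       # push: ignore constraints
        else:
            if eps is None:
                # seed the epsilon level from push-stage statistics,
                # e.g. the maximum violation in the current population
                eps = max(evaluate(x)[1] for x in pop)
            pop = pull_step(pop, evaluate, eps)  # pull: constrained search
            eps *= 0.9                           # shrink the relaxation
    return pop
```

The key point is that the pull stage starts from a population already near the unconstrained PF, and its epsilon level is initialized from information gathered for free during the push stage.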

The rest of the paper is organized as follows. Section 2 introduces the general idea of PPS. Section 3 gives an instantiation of PPS in the framework of MOEA/D, called PPS-MOEA/D. Section 4 designs a set of experiments to compare the proposed PPS-MOEA/D with six other CMOEAs, including MOEA/D-IEpsilon [21], MOEA/D-Epsilon [22], MOEA/D-SR [23], MOEA/D-CDP [23], C-MOEA/D [24] and NSGA-II-CDP [10]. Then, a real-world optimization problem, namely the robot gripper optimization, is used to test the performance of PPS-MOEA/D and the other six CMOEAs in Section 5. Finally, conclusions are drawn in Section 6.

Section snippets

The general framework of push and pull search

Constraints define infeasible regions in the decision space, and sometimes are defined in such a way that they have an effect on the PF in the objective space. The influence of infeasible regions on PFs can be generally classified into three different situations. For each situation, the search behavior of PPS is illustrated by Fig. 1, Fig. 2, Fig. 3, respectively, which can be summarized as follows.

  • 1.

    Infeasible regions block the way towards the PF, as illustrated by Fig. 1(a). In this

An instantiation of PPS in MOEA/D

This section describes the details of an instantiation of the push search method and the pull search method in the framework of a particular type of MOEA/D search, thus capturing the entire PPS method.

Push Subproblem.

Pull Subproblem.

Experimental settings

To evaluate the performance of the proposed PPS method, six other CMOEAs, including MOEA/D-IEpsilon [21], MOEA/D-Epsilon [22], MOEA/D-SR [23], MOEA/D-CDP [23], C-MOEA/D [24] and NSGA-II-CDP [10], are tested on LIR-CMOP1-14 [21], which have large infeasible regions in the search space. The detailed parameters in each algorithm are listed as follows:

  • 1.

    The mutation probability Pm = 1∕n (n denotes the dimension of a decision vector). The distribution index in the polynomial mutation is set to 20.

  • 2.

    DE

Robot gripper optimization

In this section, a real-world optimization problem, the robot gripper optimization problem, is formulated. Then, the proposed PPS-MOEA/D and the other six CMOEAs are tested on this optimization problem.

Conclusion

This paper proposes a general PPS framework to deal with CMOPs. More specifically, the search process of PPS is divided into two stages, namely the push and pull search processes. At the push stage, constraints are ignored, which can help PPS to cross infeasible regions in front of the unconstrained PF. Moreover, the landscape affected by constraints can be estimated during the push stage, and this information, such as the ratio of feasible to infeasible solutions and the maximum overall constraint violation, can be used to guide the parameter setting of the constraint-handling approach applied in the pull stage.

Acknowledgement

This research work was supported by Guangdong Key Laboratory of Digital Signal and Image Processing, the National Natural Science Foundation of China under Grant (61175073, 61300159, 61332002, 51375287), the Natural Science Foundation of Jiangsu Province of China under grant SBK2018022017, China Postdoctoral Science Foundation under grant 2015M571751, the Project of International, as well as Hongkong, Macao & Taiwan Science and Technology Cooperation Innovation Platform in Universities in

References (33)

  • J.A. Joines et al.

    On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GAs

  • J.C. Bean et al.

    A Dual Genetic Algorithm for Bounded Integer Programs

    (1993)
  • D.W. Coit et al.

    Adaptive penalty methods for genetic optimization of constrained combinatorial problems

    INFORMS J. Comput.

    (1996)
  • A. Ben Hadj-Alouane et al.

    A genetic algorithm for the multiple-choice integer program

    Oper. Res.

    (1997)
  • Y.G. Woldesenbet et al.

    Constraint handling in multiobjective evolutionary optimization

    IEEE Trans. Evol. Comput.

    (2009)
  • K. Deb et al.

    A fast and elitist multiobjective genetic algorithm: NSGA-II

    IEEE Trans. Evol. Comput.

    (2002)