
Open Access 14.09.2022 | Original Article

A constrained multi-objective optimization algorithm using an efficient global diversity strategy

Authors: Wenyi Long, Huachao Dong, Peng Wang, Yan Huang, Jinglu Li, Xubo Yang, Chongbo Fu

Published in: Complex & Intelligent Systems | Issue 2/2023


Abstract

When solving constrained multi-objective optimization problems (CMOPs), multiple conflicting objectives and multiple constraints need to be considered simultaneously, which is challenging to handle. Some recent constrained multi-objective evolutionary algorithms (CMOEAs) have been developed to solve CMOPs and work well on most of them. However, for CMOPs with small feasible regions and complex constraints, the performance of most algorithms still needs to be improved, especially when the feasible region is composed of multiple disjoint parts or the search space is narrow. To address this issue, an efficient global diversity CMOEA (EGDCMO) is proposed in this paper to solve CMOPs, in which a certain number of well-distributed infeasible solutions are maintained during the evolutionary process. To this end, a set of weight vectors is used to specify several subregions in the objective space, and infeasible solutions are selected from each subregion. Furthermore, a new fitness function is used in the proposed algorithm to evaluate infeasible solutions, which can balance the importance of constraints and objectives. In addition, the infeasible solutions are ranked higher than the feasible solutions to focus the search on undeveloped areas for better diversity. Comparison tests on three benchmark suites and an actual engineering application show that EGDCMO achieves more impressive performance than other constrained multi-objective evolutionary algorithms.
Notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Introduction

Constrained multi-objective optimization problems (CMOPs) widely exist in many practical applications [1, 2]; they require optimizing multiple conflicting objectives while satisfying one or more constraints simultaneously. A maximization problem can usually be transformed into a minimization problem. Without loss of generality, a CMOP can be mathematically formulated as follows:
$$ \begin{aligned} & {\text{Minimize}}\quad F\left( X \right) = \left\{ {f_{1} \left( X \right),f_{2} \left( X \right), \ldots ,f_{m} \left( X \right)} \right\} \\ & {\text{subject}}\;{\text{to}}\quad X \in \Omega \\ & \qquad \qquad \quad \;\; g_{i} \left( X \right) \le 0,\quad i = 1, \ldots ,q \\ & \qquad \qquad \quad \;\; h_{j} \left( X \right) = 0,\quad j = 1, \ldots ,h, \end{aligned} $$
(1)
where \(X = (x_{1}, x_{2}, \ldots, x_{n})\) is a candidate solution consisting of n decision variables, \(\Omega \subseteq {\mathbb{R}}^{n}\) is the decision space, \(F: \Omega \to {\mathbb{R}}^{m}\) is the objective vector consisting of m conflicting objectives, \(g_{i}(X) \le 0\) is the ith inequality constraint, q is the number of inequality constraints, \(h_{j}(X) = 0\) is the jth equality constraint, and h is the number of equality constraints. The constraint violation of a solution x is calculated by
$$ {\text{CV}}(x) = \sum\limits_{i = 1}^{q} {\max \{ 0,g_{i} (x)\} } + \sum\limits_{j = 1}^{h} {\left| {h_{j} (x)} \right|} . $$
(2)
A solution x is feasible if \({\text{CV}}(x) = 0\); otherwise, x is called an infeasible solution. Given two feasible solutions \(x_{1}, x_{2} \in \Omega\), \(x_{1}\) is said to dominate \(x_{2}\) (denoted as \(x_{1} \prec x_{2}\)) if \(f_{i}(x_{1}) \le f_{i}(x_{2})\) for every \(i \in \left\{ {1, \ldots ,m} \right\}\) and \(f_{j}(x_{1}) < f_{j}(x_{2})\) for at least one \(j \in \left\{ {1, \ldots ,m} \right\}\). A solution \(x^{*}\) is called Pareto optimal if no solution in \(\Omega\) Pareto dominates it. The set of all Pareto optimal solutions is called the Pareto set (PS), and \({\text{PF}} = \left\{ {F(x)\mid x \in {\text{PS}}} \right\}\), namely the image of the Pareto set in the objective space, is called the Pareto front (PF).
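To make these definitions concrete, the following minimal Python sketch computes the constraint violation of Eq. (2) and checks Pareto dominance; the function names are illustrative and not part of the paper's implementation.

```python
import numpy as np

def constraint_violation(g_vals, h_vals):
    """Overall constraint violation CV(x) as in Eq. (2): positive parts of the
    inequality constraints plus absolute values of the equality constraints."""
    g_vals = np.asarray(g_vals, dtype=float)   # values of g_i(x) <= 0
    h_vals = np.asarray(h_vals, dtype=float)   # values of h_j(x) = 0
    return np.sum(np.maximum(0.0, g_vals)) + np.sum(np.abs(h_vals))

def dominates(f1, f2):
    """Pareto dominance for minimization: f1 is no worse in every objective
    and strictly better in at least one."""
    f1, f2 = np.asarray(f1, dtype=float), np.asarray(f2, dtype=float)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

# Example: a solution with no violated constraints is feasible (CV = 0),
# and [1.0, 2.0] dominates [1.5, 2.0] in a two-objective minimization problem.
print(constraint_violation([-0.5, -1.0], [0.0]))   # 0.0
print(dominates([1.0, 2.0], [1.5, 2.0]))           # True
```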
Evolutionary algorithms (EAs) have no built-in mechanism to handle constraints, while most real-world design optimization problems involve many constraints. A large number of constraint-handling techniques have therefore been proposed in the field of evolutionary computing [3]. The existing constraint-handling techniques (CHTs) in constrained multi-objective optimization can be divided into six groups: penalty functions [4], the constrained dominance principle (CDP) [5], the ε constrained method [5], stochastic ranking [7], multi-objective optimization-based methods [8], and hybrid methods [9].
In recent years, some constrained multi-objective evolutionary algorithms (CMOEAs) have also been developed to solve CMOPs. Fan et al. [10] proposed a biphasic CMOEA, namely push and pull search (PPS), which divides the search process into two stages. In the push stage, a multi-objective evolutionary algorithm explores the search space without considering any constraints; in the pull stage, the constraints are considered to pull the population back to the feasible and non-dominated regions. Liu et al. [11] proposed a two-stage framework called ToP. In the first phase of ToP, the original CMOP is transformed into a constrained single-objective optimization problem; in the second phase, all the constraints and objectives are considered to obtain the Pareto optimal solutions. Zhu et al. [12] developed a decomposition-based constrained MOEA called MOEA/D-DAE. In MOEA/D-DAE, a detect-and-escape strategy uses the feasible ratio and the change rate of the overall constraint violation to detect stagnation, and adjusts the weight of the constraint violation to guide the search out of stagnation states. Ma et al. [13] proposed a multi-stage CMOEA called MSCMO. In the early stages of MSCMO, only a small number of constraints are considered, which allows the population to efficiently converge to the potentially feasible region with good diversity. As the algorithm moves to the later stages, more constraints are considered to search for the optimal solutions based on the solutions obtained in the previous stages.
Although constrained multi-objective optimization has been studied for two decades, there are still limitations in solving problems with small feasible regions and complex nonlinear constraints, which may prevent the population from escaping a local optimum or a local feasible region and may trap it in a narrow feasible region, especially when the feasible region consists of several disjoint parts or is narrow in the search space. Some recent studies suggest that reasonably utilizing the information on objectives helps to balance the importance of constraints and objectives in the evolutionary process [14, 15], and that the information of infeasible solutions can help to resolve the dilemma between exploration and exploitation [16]. Following this idea, this paper introduces an evolutionary algorithm that uses an efficient global diversity strategy to maintain a certain number of well-distributed infeasible solutions when solving CMOPs. The main contributions of this work are summarized as follows:
1.
An efficient global diversity CMOEA (EGDCMO) is proposed to solve CMOPs with small feasible regions and complex constraints. In this algorithm, an efficient global diversity strategy is proposed to select infeasible solutions for the next generation, which helps to maintain well-distributed infeasible solutions at the early stage of evolution. In this way, the population can effectively converge to the potentially feasible region with global diversity.
 
2.
A new fitness function is used in the proposed EGDCMO to evaluate infeasible solutions, which can balance the importance of constraints and objectives.
 
3.
The proposed EGDCMO is tested on benchmark CMOPs and a real-world application to verify its effectiveness. According to the experimental results, the proposed EGDCMO outperforms some state-of-the-art CMOEAs on both the benchmark CMOPs and the engineering problem.
 
The remainder of this paper is organized as follows. In “Related works”, we first introduce the existing evolutionary approaches developed for CMOPs. “The proposed algorithm” explains the proposed algorithm in detail. Experiments on the benchmark CMOPs and a real-world application are carried out in “Experimental study”. “Conclusions” concludes the paper.

Related works

In this section, we briefly review the existing multi-objective evolutionary algorithms (MOEAs) for solving CMOPs, as well as the existing MOEAs that maintain a certain proportion of infeasible solutions in the population during evolution.

Existing MOEAs

Generally speaking, existing MOEAs can be divided into the following six categories.
The first category is the penalty function method-based MOEAs [4]. The penalty function method was first used to deal with single objective constrained optimization problems. Its basic principle is to add a penalty function to the original objective function to get an augmented objective function and convert a constrained optimization problem into an unconstrained one [17]. Because of its simple form, it has been widely used in EAs. However, penalty functions require careful tuning of penalty factors to properly punish the infeasible solution and penalty factors always depend on the problem [18]. Researchers have developed some types of penalty functions for EAs, such as static penalty [19], dynamic penalty [20], adaptive penalty [21], and so on. In addition, the penalty function is also used in MOEAs. Woldesenbet et al. [22] proposed a constraint handling technique for multi-objective evolutionary algorithms based on an adaptive penalty function and a distance measure. Jan et al. [23] introduced a penalty function in the MOEA/D-DE updating scheme, which penalizes infeasible solutions based on an adaptive threshold.
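As a concrete illustration of the penalty idea described above, the sketch below augments each objective with a term proportional to CV(x); the specific penalty factor and function name are illustrative assumptions, since static penalties are known to be problem-dependent.

```python
import numpy as np

def penalized_objectives(f_vals, cv, penalty_factor=1e3):
    """Static penalty: add a term proportional to the overall constraint
    violation CV(x) to every objective, so an unconstrained selection scheme
    can be applied afterwards. The factor is problem-dependent and usually
    needs careful tuning, which is the main drawback of this approach."""
    return np.asarray(f_vals, dtype=float) + penalty_factor * cv

# An infeasible solution (cv > 0) is pushed away from the region of the
# objective space it would otherwise occupy.
print(penalized_objectives([0.2, 0.8], cv=0.05))   # [50.2 50.8]
```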
The second category is dominance-based approaches. Dominance-based approaches are mainly driven by feasibility information, where feasible solutions are always preferred over infeasible ones for survival to the next generation. CDP is obtained by extending the feasibility rule; it was proposed by Deb et al. [5] in NSGA-II and is one of the most popular and effective CHTs. In this method, given two solutions, \(x_{1}\) is said to dominate \(x_{2}\) if one of the following conditions is met (a minimal comparator sketch is given after this list).
  • \({\text{CV}}(x_{1} ) = 0\) and \({\text{CV}}(x_{2} ) = 0\), and \(f_{i} (x_{1} ) \le f_{i} (x_{2} )\) for every \(i \in \left\{ {1, \ldots ,m} \right\}\) with \(f_{j} (x_{1} ) < f_{j} (x_{2} )\) for at least one \(j \in \left\{ {1, \ldots ,m} \right\}\).
  • \({\text{CV}}(x_{1} ) = 0\) and \({\text{CV}}(x_{2} ) \ne 0\), that is, \(x_{1}\) is feasible while \(x_{2}\) is infeasible.
  • \({\text{CV}}(x_{1} ) \ne 0\) and \({\text{CV}}(x_{2} ) \ne 0\), and \({\text{CV}}(x_{1} ) < {\text{CV}}(x_{2} )\).
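The three rules above can be condensed into a single comparator. The following sketch is only an illustration of CDP (the helper names are ours, not from NSGA-II's code); it restates the Pareto dominance check for self-containment.

```python
def dominates(f1, f2):
    """Pareto dominance for minimization."""
    return all(a <= b for a, b in zip(f1, f2)) and any(a < b for a, b in zip(f1, f2))

def cdp_prefers(f1, cv1, f2, cv2):
    """Constrained dominance principle: True if solution 1 is preferred over
    solution 2 under the three rules listed above."""
    if cv1 == 0 and cv2 == 0:      # both feasible: plain Pareto dominance
        return dominates(f1, f2)
    if cv1 == 0 or cv2 == 0:       # exactly one is feasible: it wins
        return cv1 == 0
    return cv1 < cv2               # both infeasible: smaller violation wins
```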
Oyama et al. [24] developed a modified dominance relation according to which solutions that violate a fewer number of constraints are preferred. Ma et al. [14] proposed a new fitness function with two rankings, its first ranking is the solution’s ranking based on CDP, and the other is the solution’s ranking based on Pareto dominance.
The third category is the ε constrained method, originally proposed by Takahama and Sakai [5] in 2006. In this method, a solution whose constraint violation is smaller than ε is regarded as feasible; otherwise, it is regarded as infeasible. The ε constrained method also requires careful tuning of parameters to properly control the value of ε. Yang et al. [25] used the epsilon method to handle constraints and applied it to the multi-objective evolutionary algorithm based on decomposition (MOEA/D). Fan et al. [26] proposed an improved epsilon constraint-handling mechanism combined with a decomposition-based multi-objective evolutionary algorithm, which adjusts the value of ε dynamically according to the ratio of feasible to total solutions in the current population.
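A minimal sketch of the ε-relaxed feasibility test described above is given below; real implementations additionally schedule ε over the generations, which is omitted here, and the function name is our own.

```python
def eps_prefers(f1, cv1, f2, cv2, eps):
    """Epsilon-constrained comparison: a solution with CV <= eps is treated
    as feasible, after which plain Pareto dominance is applied."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    feas1, feas2 = cv1 <= eps, cv2 <= eps
    if feas1 and feas2:
        return dominates(f1, f2)   # both "epsilon-feasible": compare objectives
    if feas1 != feas2:
        return feas1               # epsilon-feasible beats epsilon-infeasible
    return cv1 < cv2               # both outside eps: smaller CV is preferred
```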
The fourth category is the stochastic ranking (SR) approach. The idea of SR is to introduce a random probability Pf that decides whether solutions are compared by fitness or by CV. In other words, given any pair of adjacent solutions, if both solutions are feasible, they are compared according to the objective function with probability 1; otherwise, the probability of comparing them according to CV is Pf. Geng et al. [27] combined stochastic ranking with an infeasible elitist-based MOEA. Zhang et al. [28] proposed dynamic stochastic selection, in which the probability parameter Pf in the stochastic ranking method changes dynamically during evolution, and applied it within a multimember differential evolution framework.
The fifth category is the multi-objective optimization approach. The main idea of multi-objective constraint handling is to treat the constraints as objectives, that is, a CMOP is transformed into an unconstrained multi-objective problem, and then the transformed problem is solved by the multi-objective optimization technique. Long et al. [29] designed a multi-objective optimization-based new constraint handling technique, by taking closeness, diversity, and feasibility as three objectives in a multi-objective subproblem. Zhou et al. [30] proposed a tri-goal evolution framework and designed two indicators for convergence and diversity, converting the constraints into the third indicator for feasibility to solve constrained many-objective problems.
The last category is the hybrid method approach. The hybrid method is the combination of two or more different CHTs to deal with constraints. Deb and Datta [31] proposed a CHT method that combines multi-objective optimization and penalty function methods to deal with constrained optimization problems. Qu et al. [9] employed an ensemble of three different constraint handling methods, including adaptive penalty function, the superiority of feasible solution, and the ε constrained method. In this method, the population is divided into three subpopulations, each of which adopts different CHTs. Wang and Cai [32] proposed an adaptive tradeoff model (ATM), which divides the evolutionary process of the population into three stages and adopts different strategies according to the characteristic of each stage.

Existing MOEAs with maintaining infeasible solutions

The information on infeasible solutions helps to explore undeveloped regions [12, 33] and to solve many challenging problems, including problems with multiple disjoint feasible regions or a huge infeasible barrier. Most evolutionary optimization algorithms focus on finding the optimal feasible solutions of the problem; therefore, in the process of evolution, feasible solutions are always considered to be better than infeasible solutions.
The idea of preserving infeasible solutions was first adopted in solving constrained single-objective optimization problems. Mezura-Montes and Coello [34] suggested a simple multimembered evolution strategy (SMES) that allows the best infeasible solution to remain in the population for the next generation. Ray et al. [35] proposed an infeasibility-driven evolutionary algorithm to solve CMOPs, which maintains a small percentage of infeasible solutions close to the constraint boundaries during its course of evolution. Isaacs et al. [36] proposed a constraint handling method that maintains infeasible solutions in the population by reformulating the original m-objective constrained minimization problem into an unconstrained (m + 1)-objective minimization problem, where the additional objective is the number of constraint violations.
In recent years, some coevolutionary algorithms have been proposed to solve CMOPs. In coevolutionary constraint handling techniques, there is usually an archive to save infeasible solutions. Tian et al. [33] proposed an evolutionary algorithm evolving two populations to solve CMOPs, in which one population evolves without considering constraints to maintain diversity, while the other solves the original CMOP to maintain feasibility as well as convergence. Li et al. [16] proposed a two-archive EA (C-TAEA) for constrained multi-objective optimization. In C-TAEA, the convergence-oriented archive (CA) is evolved to optimize the constraints and objectives and aims to push the population toward the Pareto front, while the diversity-oriented archive (DA) is evolved to optimize only the objectives and aims to maintain population diversity by exploring areas underexploited by the CA, including infeasible regions. Sorkhabi et al. [37] proposed an efficient approach for constraint handling in multi-objective particle swarm optimization, which divides the particle population into two non-overlapping populations, a feasible one and an infeasible one, where feasible particles are evolved toward Pareto optimality and infeasible particles are evolved toward the feasible particles. Wang et al. [38] proposed a cooperative differential evolution framework (CCMODE) for constrained multi-objective optimization, including M subpopulations for constrained single-objective optimization and an archive population for constrained M-objective optimization.
Very recently, some CMOEAs that exploit the valuable information in objective functions have been proposed, in which a certain number of infeasible solutions are preserved in the population to enhance the convergence pressure at the early stage. Yu et al. [15] proposed a novel CMOEA named DSPCMDE, in which some valuable infeasible solutions are maintained, and the balance between objective functions and constraints is adjusted dynamically by considering the desired search emphasis in different evolutionary stages. Tian et al. [39] proposed a two-stage CMOEA to solve CMOPs with different feasible regions. In one stage, some infeasible solutions are maintained to help the population cross infeasible regions, and the fitness evaluation strategies are adjusted during the evolutionary process to adaptively balance objective optimization and constraint satisfaction. Peng et al. [40] proposed a novel constraint-handling technique based on directed weights to deal with CMOPs, which utilizes the useful information contained in infeasible individuals to guide the search to the promising region.

The proposed algorithm

In this section, the proposed EGDCMO is explained in detail. The general flow chart of the proposed EGDCMO is given in Fig. 1. EGDCMO starts with the random initialization of a population P with size N, and then the algorithm evolves from generation to generation until the maximal number of function evaluations (maxNFE) is reached. In each generation, the algorithm repeats the following main operations: (1) reproduction, (2) external archive update, (3) environmental selection of feasible solutions, and (4) selection of infeasible solutions for the next generation using an efficient global diversity strategy. It is worth noting that EGDCMO maintains a certain number of diverse infeasible solutions, which provides as much diversified information as possible. The specific components are detailed in the following subsections.

Procedure of EGDCMO

Algorithm 1 provides the pseudocode of the proposed EGDCMO. In Line 1, EGDCMO starts with the random initialization of a population P with size N. The main loop of the proposed EGDCMO is shown in Lines 4–33. In each generation, the reproduction shown in Lines 5–6 uses binary tournament selection and polynomial mutation to generate offspring. The external archive update is shown in Line 7, where the non-dominated feasible solutions are saved in the external archive. In Line 8, the population is divided into an infeasible set and a feasible set. Line 9 determines the number of infeasible solutions that need to be maintained. If the infeasible set has no more than Ninf solutions, all solutions from the infeasible set are selected. If the feasible set has no more than Nf solutions, all the feasible solutions are selected and the rest are filled with infeasible solutions, as formulated below.
$$ N_{\inf } = \begin{cases} {\text{ceil}}(\alpha * N) & N_{1} \ge \alpha * N\;{\text{and}}\;N_{2} \ge (1 - \alpha ) * N\;{\text{and}}\;{\text{NFE}} < \beta * {\text{maxNFE}} \\ N - N_{2} & N_{2} < (1 - \alpha ) * N\;{\text{and}}\;{\text{NFE}} < \beta * {\text{maxNFE}} \\ N_{1} & N_{1} < \alpha * N\;{\text{and}}\;{\text{NFE}} < \beta * {\text{maxNFE}} \\ 0 & {\text{NFE}} \ge \beta * {\text{maxNFE}}, \end{cases} $$
(3)
where N1 and N2 are the numbers of infeasible and feasible solutions, respectively, α is the proportion of infeasible solutions to maintain, and N is the population size. We set β = 0.8; the reason for this setting is discussed in “Analysis of the proposed EGDCMO”. The uniformly distributed weight vectors are generated in Line 10, and the number of weight vectors is equal to Ninf. Then, the fitness and the truncation method are used to select Nf solutions from Sf in the environmental selection shown in Line 11. Lines 12–19 determine whether the value of the flag is 1. Specifically, when the average change of the feasible population (Sf) on each objective within 150 generations is less than the given threshold of 0.01, the flag is set to 1, and the flag is judged again after another 150 generations. To compute the average value of Sf on the ith objective, Formula (4) is used to normalize the objective value of each feasible solution in Sf, and Formula (5) is then used to calculate the average value of the feasible solutions in Sf on each objective [13], where \(\left| {S_{{\text{f}}} } \right|\) is the number of solutions in the feasible population Sf:
$$ f_{i}^{^{\prime}} = \frac{{f_{i} (x) - f_{i}^{\min } }}{{f_{i}^{\max } - f_{i}^{\min } }},\;x \in S_{{\text{f}}} , $$
(4)
$$ \bar{f}_{i} = \frac{{\sum\nolimits_{{x \in S_{{\text{f}}} }} {f_{i}^{\prime } (x)} }}{{\left| {S_{{\text{f}}} } \right|}}. $$
(5)
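The sketch below mirrors Formula (3) and the flag test built on Formulas (4) and (5); the variable names are ours, and the stagnation window and threshold follow the values stated in the text (150 generations, 0.01).

```python
import math
import numpy as np

def num_infeasible_to_keep(n1, n2, N, nfe, max_nfe, alpha=0.5, beta=0.8):
    """Number of infeasible solutions N_inf to maintain, following Formula (3).
    n1, n2: sizes of the infeasible and feasible sets of the current population."""
    if nfe >= beta * max_nfe:                      # final stage: keep none
        return 0
    if n1 >= alpha * N and n2 >= (1 - alpha) * N:  # both sets are large enough
        return math.ceil(alpha * N)
    if n2 < (1 - alpha) * N:                       # too few feasible: fill with infeasible
        return N - n2
    return n1                                      # too few infeasible: keep them all

def mean_normalized_objectives(F_feasible):
    """Formulas (4)-(5): normalize each objective of the feasible set Sf to
    [0, 1] and return the per-objective mean used in the stagnation test."""
    F = np.asarray(F_feasible, dtype=float)
    f_min, f_max = F.min(axis=0), F.max(axis=0)
    F_norm = (F - f_min) / np.maximum(f_max - f_min, 1e-12)
    return F_norm.mean(axis=0)

def stagnation_flag(mean_now, mean_150_generations_ago, threshold=0.01):
    """flag = 1 when the average change on every objective over the last
    150 generations stays below the threshold (0.01 in the paper)."""
    diff = np.abs(np.asarray(mean_now) - np.asarray(mean_150_generations_ago))
    return int(np.all(diff < threshold))
```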
In Lines 20–21, the Ninf infeasible solutions are selected from Sinf; afterward, the fitness value of each solution in Sinf is calculated. In Lines 22–23, feasible and infeasible solutions are combined. To add selection pressure toward generating better infeasible solutions [36, 41], the infeasible solutions are ranked higher than the feasible solutions, which focuses the search on undeveloped areas for better diversity. When the evolution has entered the final stage, that is, when NFE ≥ 0.8 * maxNFE (NFE is the number of function evaluations), the population is replaced by the solutions in the external archive obtained in the previous generation and α is set to 0. The loop executes until the maximal number of function evaluations is reached, and then S is returned as the final result, as shown in Line 32.

A new fitness function to evaluate infeasible solutions

An example ranking procedure using CDP is presented in Fig. 2. For a problem with strict constraints, all solutions in the population may be infeasible at the initial stage. The solutions are then sorted according to CV (the smaller the CV, the higher the ranking) and the best N solutions are selected. Therefore, solutions with smaller CV survive to the next generation, making the population evolve toward the feasible region promptly. However, due to the lack of mechanisms to maintain population diversity, the population cannot approach the feasible region from different directions and may get trapped in a narrow feasible region. It is worth mentioning that retaining some “good” infeasible solutions in each generation helps to solve problems where the optimal solutions lie on constraint boundaries.
To achieve the tradeoff between objective functions and constraints and choose “good” infeasible solutions for the next generation, a natural way is to use the information of the objective functions. Fortunately, we can use the method in SPEA2 [42] to calculate the fitness of infeasible solutions without considering any constraints, so that the ranking of an infeasible solution xi (\(S_{i}^{F}\)) is obtained in Line 1 of Algorithm 2; a smaller value of \(S_{i}^{F}\) represents better performance of xi. Afterward, the other ranking of the infeasible solution xi (\(S_{i}^{{{\text{NSCV}}}}\)) is obtained, in which the number of satisfied constraints (NS) and CV are used as two objectives (minimize CV and maximize NS) for sorting based on Pareto dominance, as shown in Lines 3–10 of Algorithm 2. Subsequently, to balance the importance of objectives and constraints, we use a new fitness function to evaluate infeasible solutions.
$$ \begin{aligned} \min \quad & {\text{fitness}}_{i} = \alpha_{1} * S_{i}^{F} + \alpha_{2} * S_{i}^{{{\text{NSCV}}}} \\ & \alpha_{1} = 0.5 + 0.5 * P_{{\text{f}}} ,\quad \alpha_{2} = 1 - \alpha_{1} . \end{aligned} $$
(6)
Note that Pf is the proportion of feasible solutions. When handling infeasible solutions, the information of the objectives is utilized and NS is also considered; the larger the NS value is, the more likely the infeasible solution is to be located at the constraint boundary, especially when the problem has many constraints. Compared with CDP, which only considers minimizing CV, the new fitness function evaluates infeasible solutions more comprehensively.
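A sketch of Formula (6) follows, assuming the two rank vectors (the SPEA2-style rank S^F and the NS/CV non-dominated rank S^NSCV) have already been computed as in Algorithm 2; the helper name is ours.

```python
import numpy as np

def infeasible_fitness(rank_spea2, rank_nscv, prop_feasible):
    """Formula (6): weighted sum of the SPEA2-based ranking (objectives only)
    and the ranking from non-dominated sorting on (minimize CV, maximize NS).
    Smaller fitness is better."""
    alpha1 = 0.5 + 0.5 * prop_feasible   # more feasible solutions -> more weight on objectives
    alpha2 = 1.0 - alpha1
    return alpha1 * np.asarray(rank_spea2, dtype=float) + alpha2 * np.asarray(rank_nscv, dtype=float)

# Example: three infeasible solutions when 20% of the population is feasible.
print(infeasible_fitness([1, 3, 2], [2, 1, 3], prop_feasible=0.2))
```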
It is necessary to point out that, although the fitness function used here to evaluate infeasible solutions has some similarities to that of ToR in [14], the main differences are as follows. In ToR, the fitness function of each solution is defined as the weighted sum of two rankings: one is the solution’s ranking based on CDP, and the other is the solution’s ranking based on Pareto dominance. In our method, by contrast, the fitness function of infeasible solutions is defined as the weighted sum of two rankings: one is the solution’s ranking based on the fitness of infeasible solutions obtained by SPEA2 without considering any constraints, and the other is a non-dominated sorting based on two aspects (minimize CV, maximize NS). When handling infeasible solutions, the information of the objectives is utilized and NS is also considered; the larger the NS value is, the more likely the infeasible solution is to be located at the constraint boundary [43], especially when the problem has many constraints. Compared with CDP, which only considers minimizing CV, the new fitness function evaluates infeasible solutions more comprehensively.
In ToR, the search becomes more biased toward constraints than objectives as the proportion of feasible solutions increases. In our method, when the proportion of feasible solutions increases, α1 is greater than α2, which means that the search is more biased toward objectives than constraints. That is to say, with the increase of α1, less and less information about constraints is considered, which helps explore the undeveloped areas for better diversity. Considering that more information on objective functions is helpful for exploration in both the feasible and infeasible regions and avoids stagnation in some local regions [15], the fitness of the current infeasible solutions is related to the proportion of feasible solutions, and when the proportion of feasible solutions increases, the selection of infeasible solutions tends to utilize more objective function information.

An efficient global diversity strategy to select infeasible solutions for the next generation

The key component in EGDCMO is how to select Ninf infeasible solutions from Sinf. Before explaining the efficient global diversity strategy to select infeasible solutions, we first introduce the idea from MOEA/D-M2M [44] to divide the objective space into several subregions, each of which is represented by a unique weight vector generated by the canonical simplex-lattice design method [45]. Specifically, W = {w1, w2,…, wNinf} is a set of uniformly distributed weight vectors, and a subregion \(\Delta^{i}\) is defined as
$$ \Delta^{i} = \left\{ {f(x) \in {\mathbb{R}}^{m} \;\left| {\;\left\langle {f(x),w^{i} } \right\rangle \le \left\langle {f(x),w^{j} } \right\rangle \;\;{\text{for}}\;{\text{any}}\;j = 1, \ldots ,N} \right.} \right\}, $$
(7)
where \(\left\langle {f(x),w} \right\rangle\) is the acute angle between f(x) and w. After the setup of subregions, solutions are assigned to them as follows. First, the objective values of all solutions at the current generation are normalized as
$$ \overline{f}_{i} (x) = \frac{{f_{i} (x) - f_{i}^{*} }}{{f_{i}^{{{\text{nad}}}} - f_{i}^{*} }}, $$
(8)
where \(f_{i}^{*}\) and \(f_{i}^{{{\text{nad}}}}\) are the estimated ideal and nadir points, which are the minimum and maximum values of objective values of fi at the current generation, respectively. Then, each solution X of the population is associated with a unique subregion, and the index of the subregion is determined as
$$ k = \mathop {\arg \min }\limits_{{i \in \{ 1, \ldots N\} }} \left\langle {\overline{f}({\text{x}}),{\text{W}}^{i} } \right\rangle . $$
(9)
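A sketch of the normalization in Eq. (8) and the angle-based association in Eq. (9) is given below; the smallest acute angle is found by maximizing the cosine between the normalized objective vector and each weight vector. The helper name is ours.

```python
import numpy as np

def associate_subregions(F, W):
    """Assign each solution to the subregion whose weight vector forms the
    smallest acute angle with its normalized objective vector, Eqs. (8)-(9).
    F: (n_solutions, m) objective values; W: (n_weights, m) weight vectors."""
    F = np.asarray(F, dtype=float)
    W = np.asarray(W, dtype=float)
    ideal, nadir = F.min(axis=0), F.max(axis=0)
    F_bar = (F - ideal) / np.maximum(nadir - ideal, 1e-12)          # Eq. (8)
    cos = (F_bar @ W.T) / (
        np.maximum(np.linalg.norm(F_bar, axis=1, keepdims=True), 1e-12)
        * np.maximum(np.linalg.norm(W, axis=1), 1e-12)
    )
    return np.argmax(cos, axis=1)   # max cosine = min acute angle, Eq. (9)

# Example: two weight vectors split a 2-objective space into two subregions.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
F = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
print(associate_subregions(F, W))   # [0 1 1]
```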
The efficient global diversity strategy for selecting infeasible solutions for the next generation has two characteristics: (1) during evolution, infeasible solutions are selected in different ways according to the value of the flag, and (2) at the early stage of evolution, the diversity of the population is maintained even when most solutions are infeasible. The pseudocode for selecting infeasible solutions is shown in Algorithm 3. Specifically, the value of the flag is judged according to the method introduced in "Procedure of EGDCMO". If flag = 1, that is, the change of the average value of the feasible population (Sf) on each objective is less than the given threshold of 0.01 within 150 generations, the feasible population obtained in the process of evolution changes little in objective value and may have fallen into a narrow feasible region. Then, the fitness and the truncation method are used to select Ninf solutions from Sinf without considering the constraints (Line 3 of Algorithm 3), which helps explore the undeveloped areas for better diversity and jump over the infeasible band to evolve toward the global PF. At the early stage of evolution, flag = 0 and the population may contain only infeasible solutions. In this case, we associate each solution in Sinf with its corresponding subregion according to the method mentioned above. The next step is to determine which solutions should be selected: we iteratively investigate each subregion and decide the survival of solutions in Sinf to the next generation. In particular, in each iteration, only one solution can survive in each subregion. In other words, for \(\Delta_{W}^{i} ,\;i \in \{ 1, \ldots ,N_{\inf } \}\), the fitness values of the solutions in Sinf associated with \(\Delta_{W}^{i}\), denoted as Xi, are calculated by Formula (6), and the solution with the smallest fitness value is chosen to survive to the next generation (Lines 6–18 of Algorithm 3). Here, the best solution xb is identified as
$$ x^{b} = \mathop {\arg \min }\limits_{{x \in X^{i} }} \{ {\text{fitness}}_{i} \} . $$
(10)
This iterative investigation continues until Ninf infeasible solutions have been selected.
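Putting the pieces together, the following schematic of Algorithm 3 (Lines 6–18) sweeps the subregions repeatedly and keeps, per sweep, the associated infeasible solution with the smallest Formula (6) fitness until Ninf solutions are chosen. It takes the subregion indices from the previous sketch as input; the function signature is our own simplification.

```python
import numpy as np

def select_infeasible(region, fitness_inf, n_inf, n_regions):
    """Efficient global diversity selection (schematic of Algorithm 3).
    region[i]  : subregion index of infeasible solution i (previous sketch)
    fitness_inf: Formula (6) fitness of each infeasible solution (smaller is better)."""
    fitness_inf = np.asarray(fitness_inf, dtype=float)
    remaining = set(range(len(fitness_inf)))
    selected = []
    while len(selected) < n_inf and remaining:
        for k in range(n_regions):       # one pass: at most one survivor per subregion
            members = [i for i in remaining if region[i] == k]
            if not members:
                continue                 # this subregion is empty in this pass
            best = min(members, key=lambda i: fitness_inf[i])   # Eq. (10)
            selected.append(best)
            remaining.discard(best)
            if len(selected) == n_inf:
                break
    return selected
```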

Analysis of the proposed EGDCMO

To analyze the effectiveness of the proposed EGDCMO and understand its mechanisms for maintaining population diversity, we present a hypothetical population with ten candidate solutions for the MW11 [46] problem, from which the best five solutions are to be selected. As shown in Fig. 3a and Table 1, when all solutions are infeasible, if CDP is adopted, A, B, C, D, and F are selected, since they have smaller CV. Solutions that perform well in the objective space are eliminated, which prevents the population from approaching the feasible region from different directions and may trap it in a narrow feasible region. In Fig. 3b, the objective space is divided into several subregions; note that the number of subregions depends on the number of infeasible solutions that need to be maintained, and each solution is associated with a subregion. Afterward, we traverse each subregion and select a solution from each subregion until the quantity requirement is met. If more than one solution is associated with \(\Omega_{C}^{i}\), the solution with the smallest fitness value is chosen. For example, for the three infeasible solutions A, B, and C in \(\Omega_{C}^{1}\), their fitness values are calculated by Formula (6), and solution A with the smallest fitness value is selected. Since Formula (6) introduces the information of objectives, the selected solution A is located near the constrained PF. Figure 3 and Table 2 present an example to illustrate this issue. Finally, A, F, H, I, and J are selected based on the proposed EGDCMO, where H, I, and J have the opportunity to approach other feasible regions and maintain population diversity.
Table 1
The best five solutions selected by CDP for the next generation on MW11 are highlighted in gray
Table 2
The best five solutions selected by EGDCMO for the next generation on MW11 are highlighted in gray
Figure 4 depicts the hypothetical distributions of ten candidate solutions on the 2-objective LIR-CMOP10, which has a band of infeasible region, posing challenges to MOEAs in terms of convergence. As shown in Fig. 4a and Table 3, if CDP is adopted, F, G, H, I, and J are selected. Since CDP prefers feasible solutions and ignores some infeasible solutions that perform well in the objective space, it has trouble jumping over the infeasible band. The proposed EGDCMO uses an efficient global diversity strategy to maintain Ninf infeasible solutions, where Ninf is calculated by Formula (3) and α is set to 0.5. In Fig. 4b, the objective space is divided into several subregions, and the solution with the smallest fitness value in each subregion is selected. For example, for the infeasible solutions B and C in \(\Omega_{C}^{2}\), if only the constraint value is considered to determine the survival of the solution, C with the smallest CV value is selected instead of solution B with better convergence. We use the method introduced in “A new fitness function to evaluate infeasible solutions” to evaluate the infeasible solutions in each subregion, which helps balance the importance of constraints and objectives. In this way, solution B with better convergence is selected (Table 4).
Table 3
The best five solutions selected by CDP on LIR-CMOP10 for the next generation are highlighted in gray
Table 4
The best five solutions selected by EGDCMO on LIR-CMOP10 for the next generation are highlighted in gray
Next, to show the superiority of the proposed EGDCMO in dealing with CMOPs with complex constraints, EGDCMO is compared with NSGA-II for a detailed discussion. Simulated binary crossover [47] and polynomial mutation [48] are adopted for generating offspring; the probability of simulated binary crossover is set to 1, and the probability of polynomial mutation is set to 1/D (D denotes the number of decision variables). The populations in the early, middle, and last generations of NSGA-II and EGDCMO are plotted in Fig. 5. The MW11 problem has a very small feasible region (the size of the feasible region is below 0.1‰). As shown in the first column of Fig. 5, the solutions of NSGA-II are infeasible in the early generations. Due to the lack of mechanisms to maintain population diversity, the population converges to only a single feasible region. In the middle generations, some “good” infeasible solutions appear, but since CDP prefers feasible solutions, these infeasible solutions cannot survive to the next generation, so new areas of the feasible region remain unexplored until the algorithm stops. When EGDCMO handles infeasible solutions, the objective space is divided into several subregions and the information on objectives is utilized, which achieves a tradeoff between objectives and constraints in each subregion. Thus, the population can approach the feasible region from different directions. Some “good” infeasible solutions that balance the importance of constraints and objectives are retained, which helps to search for the Pareto optimal solutions located on the boundary of the feasible region and to explore the undeveloped areas for better diversity.
Figure 6 shows the populations in the early, middle, and last generations of NSGA-II and EGDCMO on LIR-CMOP10 [26], which has a particularly complex constrained landscape and a huge infeasible barrier. As shown in the first column of Fig. 6, when the number of feasible solutions is greater than the population size, the infeasible solutions with better convergence are eliminated because CDP prefers feasible solutions, making it difficult to generate offspring that can jump over the infeasible barrier. Finally, the solutions obtained by NSGA-II are completely stuck at the boundary of the infeasible barrier. EGDCMO explicitly maintains Ninf infeasible solutions with better convergence, where Ninf is calculated by Formula (3). From the second column in Fig. 6, we can find that many infeasible solutions are located in the infeasible barrier, which helps to generate offspring that jump over the infeasible barrier and evolve toward the global PF.
To analyze the influence of parameters on the performance of the proposed EGDCMO, β is set to 0.1, 0.3, 0.5, 0.7, 0.8, and 0.9, and a comparative study is performed on problems with a highly multimodal landscape and a band of infeasible region, namely the DC2-DTLZ3 and DC3-DTLZ3 problems. Figure 7 plots the Pareto fronts with the median IGD value among 30 runs obtained by EGDCMO on DC3-DTLZ3 for different values of β. It can be observed that when β is set to 0.8 or 0.9, EGDCMO can jump over the infeasible band and find feasible solutions with the optimal objective value on DC3-DTLZ3. It can be concluded that the larger the β value, the better EGDCMO can fully explore underdeveloped areas and jump out of local optima. Figure 8 plots the changes in the inverted generational distance (IGD) values of the proposed EGDCMO on DC2-DTLZ3 and DC3-DTLZ3 for different values of β; when β is set to 0.8 or 0.9, EGDCMO obtains good results. Considering that the last stage is intended to strengthen the convergence of the population, β is set to 0.8 in this paper to ensure that EGDCMO has sufficient convergence.
The same comparative study is used to analyze the effect of α (i.e., the proportion of infeasible solutions to maintain) on EGDCMO, with α set to 0.1, 0.3, 0.5, 0.7, and 0.9. The Pareto fronts of EGDCMO with the median IGD for different values of α on DC3-DTLZ3 are shown in Fig. 9. It is obvious from the figure that when α is set to 0.5, 0.7, or 0.9, EGDCMO can find the optimal feasible region and has better convergence performance. According to the convergence profiles of the IGD values shown in Fig. 10, EGDCMO has poor convergence performance on DC2-DTLZ3 when α is set too small or too large, and it converges faster when α is set to 0.5. Since some offspring solutions are generated around infeasible solutions, it is likely that some solutions able to jump out of the infeasible region are generated, which helps to explore feasible and infeasible regions. At the same time, to ensure that there are enough feasible solutions to explore the feasible region and to balance diversity and convergence, α is set to 0.5 in this paper.

Experimental study

In this section, to verify the performance of the proposed EGDCMO, the proposed EGDCMO is compared with six other CMOEAs, including NSGA-II, ToP, MOEA/D-DAE, C-TAEA, PPS, and DSPCMDE on the constrained DTLZ test suite, the MW test suite and the LIR-CMOP test suite. All our experiments are executed on the evolutionary multi-objective optimization platform PlatEMO [49].

Parameter settings

1.
Problems: The MW [46] problems, constrained DTLZ [50] problems, and LIR-CMOP [26] problems are the benchmark problems adopted in this paper. The number of decision variables D and the number of objectives M of each benchmark problem are set as follows. For the constrained DTLZ problems, D = 7 and M = 3 for C1-DTLZ1, DC1-DTLZ1, DC2-DTLZ1, and DC3-DTLZ1, D = 30 and M = 3 for DTLZ8, D = 20 and M = 2 for DTLZ9, and D = 12 and M = 3 for the other DTLZ problems. For the MW problems, D = 15 and M = 2 for all problems except MW4, MW8, and MW14, for which M is set to 3. For the LIR-CMOP problems, D = 10 and M = 3 for LIR-CMOP13 and LIR-CMOP14, and D = 10 and M = 2 for the remaining problems. The number of function evaluations for constrained DTLZ, MW, and LIR-CMOP is set to 100,000, 100,000, and 300,000, respectively.
 
2.
Compared algorithms: Six other CMOEAs, namely NSGA-II [5], ToP [11], MOEA/D-DAE [12], C-TAEA [16], PPS [10], and DSPCMDE [15], are adopted in this paper to verify the performance of the proposed EGDCMO. The parameters of all the compared CMOEAs are set as suggested in their original papers. ToP is embedded in NSGA-II-CDP, where the first phase ends when the feasibility proportion Pf is larger than 1/3 or the difference δ is less than 0.2. In addition, the population size of all the comparison algorithms and the proposed algorithm is set to 100 on problems with two objectives and 105 on problems with three objectives in our experiments.
 
3.
Genetic operators: NSGA-II, MOEA/D-DAE, C-TAEA, and EGDCMO adopt simulated binary crossover and polynomial mutation to generate offspring, DSPCMDE uses two popular DE [15] operators, whereas PPS and ToP use differential evolution (DE) [51] and polynomial mutation. The crossover probability is set to 1, the probability of polynomial mutation is set to 1/D (D denotes the number of decision variables), and the distribution indexes of both the crossover and the mutation are set to 20.
 
4.
Performance metrics: Since the true Pareto fronts (PFs) of the benchmark problems are known, the inverted generational distance (IGD) [52] metric is employed to assess the performance of all the CMOEAs, and approximately 10,000 reference points are sampled on the PF of each problem to calculate IGD. The hypervolume (HV) [53] metric is also used in our experiments. A smaller IGD or a larger HV value indicates a better approximation to the true PF (a minimal IGD computation is sketched below).
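The following is a minimal IGD computation consistent with the description above (the mean Euclidean distance from each sampled reference point on the true PF to its nearest obtained solution); the reference-point sampling itself is problem-specific and not shown.

```python
import numpy as np

def igd(reference_pf, obtained_front):
    """Inverted generational distance: mean distance from every reference
    point on the true PF to the closest obtained solution; smaller is better."""
    R = np.asarray(reference_pf, dtype=float)     # (n_ref, m) sampled true PF
    A = np.asarray(obtained_front, dtype=float)   # (n_sol, m) obtained solutions
    d = np.linalg.norm(R[:, None, :] - A[None, :, :], axis=2)  # pairwise distances
    return float(d.min(axis=1).mean())
```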

Experimental results on the benchmark problems

Table 5 shows the mean value and standard deviation of the IGD values obtained by NSGA-II, ToP, MOEA/D-DAE, C-TAEA, PPS, DSPCMDE, and EGDCMO on the DTLZ and MW problems over 30 independent runs. The Wilcoxon rank sum test at the 0.05 level is presented, where “+”, “−”, and “≈” denote that the result is significantly better than, significantly worse than, and statistically similar to that obtained by EGDCMO, respectively. As shown in Table 5, the proposed EGDCMO obtains 18 best results on all 26 problems, followed by C-TAEA and MOEA/D-DAE achieving five and three best results respectively, whereas NSGA-II, ToP, PPS, and DSPCMDE achieve no best results. It can also be found that the results of EGDCMO are significantly better than those of NSGA-II, ToP, MOEA/D-DAE, C-TAEA, PPS, and DSPCMDE on 26, 26, 21, 20, 26, and 26 problems, respectively. Table 6 presents the comparison results of EGDCMO and the compared algorithms in terms of HV. According to the statistical results in Table 6, the numbers of best results obtained by NSGA-II, ToP, MOEA/D-DAE, C-TAEA, PPS, DSPCMDE, and EGDCMO are 0, 0, 2, 5, 2, 3, and 14, respectively. Besides, the six compared CMOEAs are significantly worse than EGDCMO on 24, 25, 19, 20, 21, and 21 test problems, respectively.
Table 5
Statistics of the IGD metric obtained by the proposed EGDCMO and all the compared CMOEAs on the DTLZ and MW problems
The best result in each row is highlighted. ‘‘N/A” indicates that no feasible solution is found
Table 6
Statistics of the HV metric obtained by the proposed EGDCMO and all the compared CMOEAs on the DTLZ and MW problems
The best result in each row is highlighted. ‘‘N/A” indicates that no feasible solution is found
Figures 11, 12, and 13 plot the Pareto fronts with the median IGD value among 30 runs obtained by the seven CMOEAs on DC3-DTLZ3, MW5, and MW14, respectively. The red points are the true PF. For DC3-DTLZ3, with a highly multimodal landscape and a band of infeasible region, it can be seen that NSGA-II, ToP, PPS, and DSPCMDE have trouble jumping over the infeasible band and cannot find feasible solutions with the optimal objective value. MOEA/D-DAE can find the optimal feasible region, but its population diversity is poor. Since a diversity-oriented archive (DA) is evolved by optimizing only the objectives, C-TAEA can find the optimal solutions with diversity. The proposed EGDCMO maintains a certain number of infeasible solutions in the evolutionary process, and the diversity of infeasible solutions is also considered, which helps to cross the local optimum and evolve toward the global PF. For MW5 and MW8 with discontinuous and small feasible regions, EGDCMO and MOEA/D-DAE exhibit better diversity performance than the other CMOEAs, whereas ToP cannot find any feasible solutions on MW5. The changes in the IGD values of the proposed EGDCMO and all the compared CMOEAs during the evolution process on the DC3-DTLZ3, MW5, and MW9 problems are shown in Fig. 14. Note that there is no IGD curve for ToP on MW5 and MW9, because no feasible solution can be found during the entire evolution of ToP on MW5, and on MW9, ToP usually finds feasible solutions only in the last generation or cannot find any at all. It can be found that the proposed EGDCMO maintains a low IGD value throughout the evolution process compared with the other MOEAs.
To further compare the performance of the proposed EGDCMO, EGDCMO and all the compared algorithms are tested on more complex CMOPs, namely the LIR-CMOP problems, which contain small feasible regions and complicated linkages between position and distance variables. Table 7 shows the statistics of the IGD metric for the LIR-CMOP problems. According to the experimental results, EGDCMO, PPS, and DSPCMDE perform equally well: both PPS and MOEA/D-DAE achieve 4 best results, followed by EGDCMO and DSPCMDE achieving 3 best results, whereas the remaining three compared CMOEAs achieve no best results. Based on the results of Wilcoxon’s rank sum test, EGDCMO is significantly better than NSGA-II, ToP, MOEA/D-DAE, C-TAEA, PPS, and DSPCMDE on 14, 12, 8, 13, 6, and 6 test problems, respectively. The problems LIR-CMOP5 to LIR-CMOP12 have a huge infeasible barrier and a discontinuous Pareto front; owing to the global diversity strategy for selecting infeasible solutions for the next generation, EGDCMO can cross the infeasible barrier and achieve good performance. Because MOEA/D-DAE, C-TAEA, PPS, and DSPCMDE have a detect-and-escape mechanism, a diversity-oriented archive (DA), a push–pull search stage, and a dynamic selection preference strategy, respectively, they are also able to get across the large infeasible barrier and achieve good performance.
Table 7
Statistics of the IGD metric obtained by the proposed EGDCMO and all the compared CMOEAs on the LIR-CMOP problems
The best result in each row is highlighted. “N/A” indicates that no feasible solution is found
From the HV results of all the CMOEAs on the LIR-CMOP problems shown in Table 8, we can see that EGDCMO is significantly better than NSGA-II, ToP, MOEA/D-DAE, C-TAEA, PPS, and DSPCMDE on 14, 12, 7, 13, 5, and 8 test problems, respectively. Figure 15 plots the Pareto fronts with the median IGD value among 30 runs obtained by the seven CMOEAs on LIR-CMOP9. Although NSGA-II and ToP can cross the infeasible barrier, they cannot jump out of the local optimum. EGDCMO and the other four CMOEAs can cross the infeasible barrier and eventually evolve toward the global PF.
Table 8
Statistics of the HV metric obtained by the proposed EGDCMO and all the compared CMOEAs on the LIR-CMOP problems
The best result in each row is highlighted. “N/A” indicates that no feasible solution is found

Experimental results on an actual engineering application

Finally, to verify the performance of EGDCMO on practical applications, we performed experiments on a multi-objective engineering problem, i.e., the structure optimization of a blended-wing-body underwater glider (BWBUG) [54]. When the BWBUG is lifted from the water, stress concentration may arise in the skeleton structure because of the vertical downward force on the two wings. Meanwhile, the frame structure of the glider is equivalent to a cantilever beam structure, and three parts should be considered in the analysis, namely the gravity of the skeleton, the equipment, and the buoyancy material. For the equipment, given the specific tasks and functions of this BWBUG, the concentrated force is set to 500 N and the distributed force is set to 4500 N. For the buoyancy material and other accessories, the distributed force is set to 1000 N.
For the BWBUG, the larger the internal volume, the more energy and equipment it can carry; at the same time, the weight of the skeleton increases. In practical use, the mass of the skeleton should be as light as possible while the stress constraints are met. Therefore, the skeleton mass should be minimized and the internal volume should be maximized in the optimization of the BWBUG. In this structure optimization, the internal layout of the BWBUG can be changed, that is, the internal layout will affect the shape. Internal devices are not allowed to exceed the envelope surface of the shape, which is called the layout constraint. To minimize the skeleton mass and maximize the internal volume while satisfying the stress and layout constraints, the specific design parameters and optimization formulation are summarized as follows.
Figure 16 shows the shape and layout parameter diagram of the BWBUG; the shape of the BWBUG can be divided into the plane shape and the wing shape. The Class and Shape Transformation (CST) method [55, 56] is used for wing parameterization. There are 20 design variables, including 5 section CST variables, 11 plane shape design variables, and 6 internal layout design variables. The parameter definitions are listed in Table 9. To be specific, \(D_{1}\) and \(L\) are fixed, that is, the body width and the maximum wingspan of this BWBUG are 1000 mm and 3000 mm, respectively. The channels connecting the buoyancy-regulating cabin and the battery cabins are distributed on both sides of the buoyancy-regulating cabin, and their radius and length are set to 20 mm.
Table 9
Parameter definition of BWBUG
Parameter (mm): Definition
A1, A2, A3, A4, A5: Section CST parameters
L, D1: Maximum wingspan and body width
D2, D3: Length of section
D4, D5: Front point spacing of section
Z1, Z2, Z3, Z4, Z5: Spacing of Bezier control points and section 3
L1, L2: Spacing of device and glider
L3, L4: Length of buoyancy control cabin and battery cabins
D1, D2: Diameter of buoyancy control cabin and battery cabins
Figure 17 shows the full skeleton parameter diagram of the BWBUG. The skeleton is generated based on the shape, and the overall frame structure of the glider is assembled from the internal frame structure. The internal skeleton can be divided into two parts, i.e., the fuselage and the wing. The generated fuselage is combined with the wing beam and wing rib structures to obtain the internal structure of the fuselage and wing skeleton.
Subsequently, the skeleton is defined by 5 geometric parameters, as shown in Fig. 17. To be specific, t denotes the decision variable, and the others relate to geometrical parameters of the shape. Finally, the minimization of the skeleton mass and the maximization of the internal volume are taken as the two objectives, and the stress and layout requirements as the two constraints. The expression is as follows:
$$ \begin{aligned} \min \quad & F({\mathbf{x}}) = ( - V,\;M_{{{\text{sk}}}} )^{{\text{T}}} \\ {\text{s.t.}}\quad & {\text{Inter}}V \le 0 \\ & \sigma_{\max } - \sigma_{{\text{s}}} \le 0 \\ & x_{i} \in \left[ {A_{{{\text{Section}}}} ,A_{{{\text{Plane}}}} ,A_{{{\text{Layout}}}} ,A_{{{\text{skeleton}}}} } \right],\quad i = 1,2, \ldots ,21 \\ & A_{{{\text{Section}}}} \in [A_{{{\text{Low}}}} ,A_{{{\text{Up}}}} ],\quad A_{{{\text{Plane}}}} \in [P_{{{\text{Low}}}} ,P_{{{\text{Up}}}} ] \\ & A_{{{\text{Layout}}}} \in [L_{{{\text{Low}}}} ,L_{{{\text{Up}}}} ],\quad A_{{{\text{skeleton}}}} \in [S_{{{\text{Low}}}} ,S_{{{\text{Up}}}} ], \end{aligned} $$
(11)
where \(M_{{\text{sk}}}\) is the mass of the skeleton structure, V is the internal volume, InterV is the interference volume, \(\sigma_{\max }\) is the maximal equivalent stress, and \(\sigma_{{\text{s}}}\) is the strength of the aluminum alloy. Considering the safety factor, \(\sigma_{{\text{s}}}\) is set to 232 MPa. According to the initial shape of the BWBUG, NACA0020 is selected as the basic airfoil of each section in the improved CST parameterization method. Each airfoil design has the same design variables, and the design range needs to be determined first, as shown in (12):
$$ {\text{Section}}\left\{ {\begin{array}{*{20}l} {A_{{{\text{Low1}}}} = [ - 0.0860,\; - 0.0741,\; - 0.0835,\; - 0.0574,\; - 0.0861]} \\ {A_{{{\text{Up1}}}} = [0.0860,\;0.0741,\;0.0835,\;0.0574,\;0.0861]} \\ \end{array} } \right., $$
(12)
where ALow1 denotes the lower boundaries, and AUp1 represents the upper boundaries of the design scopes, respectively. In addition, the ranges of plane shape variables, layout variables, and skeleton variables are shown in (13), (14), and (15), respectively:
$$ \begin{gathered} {\text{Plane}}\left\{ \begin{gathered} P_{{{\text{low}}}} = \left[ {350,\;150,\;350,1000,150,300,500,300,150} \right] \hfill \\ P_{{{\text{up}}}} = \left[ {450,300,400,1200,250,450,600,450,250} \right] \hfill \\ \end{gathered} \right. \hfill \\ \quad \quad \quad (D_{2} ,D_{3} ,D_{4} ,D_{5} ,Z_{1} ,Z_{2} ,Z_{3} ,Z_{4} ,Z_{5} ) \hfill \\ \end{gathered} $$
(13)
$$ \begin{gathered} {\text{Layout}}\left\{ \begin{gathered} L_{{{\text{low}}}} = \left[ {60,\;0,\;500,\;300,50,40} \right] \hfill \\ L_{{{\text{up}}}} = \left[ {260,100,700,500,100,60} \right] \hfill \\ \end{gathered} \right. \hfill \\ \quad \quad \quad (L_{1} ,L_{2} ,L_{3} ,L_{4} ,D_{1} /2,D_{2} /2) \hfill \\ \end{gathered} $$
(14)
$$ {\text{Skeleton}}\left\{ \begin{gathered} S_{{{\text{low}}}} = \left[ 5 \right] \hfill \\ S_{{{\text{up}}}} = \left[ {10} \right] \hfill \\ \end{gathered} \right.. $$
(15)
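To make the variable ranges in (12)–(15) concrete, the sketch below assembles the lower and upper bound vectors in the order section CST, plane shape, layout, and skeleton thickness; this ordering and the sampling helper are our assumptions for illustration, and the stress and layout constraints still require the CAD/FEA evaluation described below.

```python
import numpy as np

# Bounds copied from Eqs. (12)-(15); the concatenation order is an assumption.
section_low = [-0.0860, -0.0741, -0.0835, -0.0574, -0.0861]
section_up  = [ 0.0860,  0.0741,  0.0835,  0.0574,  0.0861]
plane_low   = [350, 150, 350, 1000, 150, 300, 500, 300, 150]
plane_up    = [450, 300, 400, 1200, 250, 450, 600, 450, 250]
layout_low  = [60, 0, 500, 300, 50, 40]
layout_up   = [260, 100, 700, 500, 100, 60]
skeleton_low, skeleton_up = [5], [10]

lower = np.array(section_low + plane_low + layout_low + skeleton_low, dtype=float)
upper = np.array(section_up + plane_up + layout_up + skeleton_up, dtype=float)
assert lower.shape == upper.shape == (21,)   # 5 + 9 + 6 + 1 variables, as in Eq. (11)

def random_design(rng=np.random.default_rng(0)):
    """Sample one candidate design uniformly inside the box constraints; the
    stress and interference constraints must then be checked by simulation."""
    return lower + rng.random(lower.shape) * (upper - lower)
```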
In the optimization process, finite element analysis is used to simulate this actual case. In the present study, the material is aluminum alloy (material density: 2700 kg/m3; elastic modulus: 71,000 MPa; Poisson's ratio: 0.33). Since the wing and fuselage are modeled separately, the contact between the wing and fuselage is modeled in the finite element analysis. In terms of IGD, the top three algorithms in the benchmark cases are EGDCMO, C-TAEA, and MOEA/D-DAE, so these three are used to solve this engineering problem. EGDCMO, C-TAEA, and MOEA/D-DAE each used 3000 FEs to attain the Pareto set, and the population size is set to 30. For a fair comparison, EGDCMO, C-TAEA, and MOEA/D-DAE used the same initial population. The results are summarized in Fig. 18, where the blue dots, black plus signs, and green squares are the Pareto fronts of EGDCMO, C-TAEA, and MOEA/D-DAE, respectively, and the red dots are the Pareto front obtained from the union of their Pareto sets. EGDCMO obtained 30 Pareto solutions and generated more non-dominated points with larger volume values, C-TAEA obtained 20 Pareto solutions, while MOEA/D-DAE obtained no Pareto solutions. Moreover, 4 representative points located on the final Pareto front are selected, and their optimized geometric models are shown in Fig. 18 as well. Furthermore, Fig. 19 shows the detailed solutions obtained by EGDCMO, where the blue dots represent feasible designs, pink stars refer to infeasible solutions, red and green squares are the infeasible solutions of the DOE, and the red circles are the Pareto front. Figure 20 shows the plane and sections of the 4 representative results. Specifically, section1, section2, section3, and section4 of each optimized glider have the same normalized section.
Correspondingly, the equivalent stress diagrams are provided in Fig. 21. In summary, EGDCMO can not only deal with complex benchmark cases but also efficiently tackle actual engineering applications.

Conclusions

This paper has proposed an efficient global diversity CMOEA (EGDCMO) to solve CMOPs with small feasible regions and complex constraints. Specifically, an efficient global diversity strategy is proposed to maintain a set of well-distributed infeasible solutions at the early stage of evolution. To this end, a set of weight vectors is used to specify several subregions in the objective space, and each subregion is investigated in turn. A new fitness function is suggested to evaluate infeasible solutions and decide their survival in each subregion, which is capable of balancing the importance of constraints and objectives. In addition, infeasible solutions are ranked higher than feasible ones to focus the search on undeveloped areas for better diversity.
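To make this mechanism concrete, the sketch below illustrates the general idea of associating solutions with subregions spanned by weight vectors and of scoring infeasible solutions by a weighted combination of normalized objectives and constraint violation. The cosine-based association rule, the summed-objective term, and the trade-off weight alpha are assumptions for illustration, not the exact formulas of EGDCMO.

```python
import numpy as np

def associate(F, W):
    """Assign each solution (rows of F, objectives to be minimized) to the
    weight vector with the largest cosine similarity, which partitions the
    objective space into subregions (an illustrative rule)."""
    Fn = (F - F.min(axis=0)) / (np.ptp(F, axis=0) + 1e-12)  # normalize objectives
    cos = (Fn @ W.T) / (np.linalg.norm(Fn, axis=1, keepdims=True)
                        * np.linalg.norm(W, axis=1) + 1e-12)
    return cos.argmax(axis=1)  # subregion index of each solution

def infeasible_score(F, CV, alpha=0.5):
    """Blend normalized objective quality and normalized constraint violation;
    alpha is a hypothetical trade-off weight, and smaller scores are better."""
    f = (F - F.min(axis=0)) / (np.ptp(F, axis=0) + 1e-12)
    cv = CV / (CV.max() + 1e-12)
    return alpha * f.sum(axis=1) + (1.0 - alpha) * cv
```

In each subregion, the infeasible solution with the smallest score would then be retained, so that the preserved infeasible solutions remain spread over the whole objective space.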
In the experimental part, the proposed algorithm has been tested on three benchmark cases and compared with four existing CMOEAs. The results show that the proposed algorithm achieves impressive results and better overall performance than the compared CMOEAs on the benchmark CMOPs, especially when the feasible region consists of multiple disjoint parts. Moreover, the proposed algorithm has been applied to an actual engineering application, where it also shows better performance than the existing CMOEAs.

Acknowledgements

Support from the National Natural Science Foundation of China (Grant nos. 52175251 and 51875466) is gratefully acknowledged. The research work is also supported by the Innovation Foundation for Doctor Dissertation of Northwestern Polytechnical University (Grant nos. CX2021051 and CX2022007).

Declarations

Conflict of interest

The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
