
Open Access 24.02.2023 | Original Article

Evolutionary multiobjective optimization via efficient sampling-based offspring generation

Authors: Cheng He, Lianghao Li, Ran Cheng, Yaochu Jin

Published in: Complex & Intelligent Systems | Issue 5/2023


Abstract

With the rising number of large-scale multiobjective optimization problems from academia and industry, evolutionary algorithms (EAs) with different decision variable handling strategies have been proposed in recent years. They mainly emphasize the balance between convergence enhancement and diversity maintenance for multiobjective optimization but ignore local search tailored for large-scale optimization. Consequently, most existing EAs can hardly obtain the global or local optima. To address this issue, we propose an efficient sampling-based offspring generation method for large-scale multiobjective optimization, in which convergence enhancement and diversity maintenance, together with ad hoc local search, are considered. First, the decision variables are dynamically classified into two types so that the large-scale decision space can be handled in a divide-and-conquer manner. Then, a convergence-related sampling strategy is designed to handle those decision variables related to convergence enhancement. Two additional sampling strategies are proposed for diversity maintenance and local search, respectively. Experimental results on problems with up to 5000 decision variables have indicated the effectiveness of the algorithm in large-scale multiobjective optimization.
Notes

Supplementary Information

The online version contains supplementary material available at https://doi.org/10.1007/s40747-023-00990-z.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Introduction

Multiobjective optimization problems (MOPs) are commonly seen in real-world applications, and many conventional single-objective optimization problems can also be reformulated as MOPs [1]. Specifically, an MOP involves multiple, often conflicting, objectives, and the problem can be mathematically formulated as
$$\begin{aligned} \text {minimize} \quad&\textbf{f}(\textbf{x})=\left( f_1(\textbf{x}),f_2(\textbf{x}),\dots ,f_m(\textbf{x})\right) \\ \text {subject to} \quad&\textbf{x}\in \mathbb {R}^d, \end{aligned} \qquad (1)$$
Notably, m and d are the numbers of objectives and decision variables, respectively. The optima of an MOP form a set of non-dominated solutions trading off between different objectives [2]. To be more specific, all the optima in the decision space form the Pareto optimal set (PS), and the projection of the PS into the objective space is the Pareto optimal front (PF). Thanks to the population-based property of evolutionary algorithms (EAs), many researchers have turned to evolutionary multiobjective optimization for obtaining a whole solution set in a single run [3].
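As a quick illustration of these definitions (not part of the proposed algorithm), a minimal Python sketch of the Pareto dominance test underlying the notion of non-dominated solutions, assuming minimization as in (1):

```python
import numpy as np

def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (minimization):
    f_a is no worse in every objective and strictly better in at least one."""
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

def non_dominated(F):
    """Return the non-dominated subset of a list of objective vectors F."""
    return [f for i, f in enumerate(F)
            if not any(dominates(g, f) for j, g in enumerate(F) if j != i)]

# Example: (1, 2) and (2, 1) trade off; (3, 3) is dominated by both.
print(non_dominated([np.array([1, 2]), np.array([2, 1]), np.array([3, 3])]))
```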
Multiobjective EAs (MOEAs) for solving MOPs with different properties have been designed in recent decades. Early MOEAs, dating back to the mid-1980s [4], emphasized simplicity, such as the non-dominated sorting based genetic algorithms (the multiobjective GA (MOGA [5]) and NSGA [6]). These MOEAs effectively solved simple MOPs with several decision variables (usually fewer than ten) and two to three objectives. Two decades ago, the strength Pareto EA (SPEA [7]), along with the elitist NSGA (NSGA-II [8]), the decomposition-based MOEAs (MOEA/D [9]), and the indicator-based EAs (IBEA [10]), emphasized efficiency. These MOEAs have shown promising performance in obtaining evenly distributed solution sets on problems with fewer than 100 decision variables and two to three objectives [11].
As the number of objectives increases, MOPs with more than three objectives (known as many-objective optimization problems or MaOPs) cause the loss of selection pressure for conventional MOEAs [12]. Thus, some environmental selection strategies were designed to distinguish the qualities of candidate solutions in the high-dimensional objective space [13]. For instance, some many-objective EAs have been proposed by modifying the dominance relationship [14], introducing additional reference information [15], or using effective performance indicators [16], e.g., the knee point-based EA (KnEA [17]), the reference vector guided EA (RVEA [18]), and the fast hypervolume-based EA (HypE [19]). The MOEAs above have shown promising performance in solving problems with up to 15 objectives and fewer than 100 decision variables [20].
In recent years, MOPs with more than 100 decision variables (known as large-scale MOPs or LSMOPs) [21] have appeared more frequently in complex industrial applications, e.g., community detection in complex social networks [22] and ratio error estimation of voltage transformers (TREE) [23]. Due to the rapid growth in the number of decision variables, the curse of dimensionality [24] makes LSMOPs even more challenging for MOEAs than MaOPs [25]. Some MOEAs tailored for large-scale multiobjective optimization have been proposed to handle this challenge. Generally, they can be roughly classified into four categories [26]: (1) the cooperative co-evolutionary (CC)-based MOEAs group the decision variables without specific analysis and solve LSMOPs in a divide-and-conquer manner [27]; (2) the decision variable analysis-based MOEAs group the decision variables by linking them to convergence or diversity properties [28, 29], e.g., the decision variable analysis-based MOEA (MOEA/DVA [30, 31]), the EA with cluster-based grouping (LMEA [32]), and the subpopulations-based approach (S\(^3\)-CMA-ES [33]); (3) the problem reformulation-based approaches form the third category, including the weighted optimization-based framework (WOF [34]) and the large-scale multiobjective optimization framework (LSMOF [35]); and (4) MOEAs of the last category use efficient offspring generation strategies for large-scale multiobjective optimization, e.g., the competitive swarm optimizer-based EA (LMOCSO [36]) and the adaptive direction-guided EA (DGEA [37, 38]).
The aforementioned large-scale MOEAs have shown promising performance in solving LSMOPs, but they can be further improved by addressing their inherent drawbacks. For instance, the CC-based approaches may incorrectly group the decision variables; the decision variable analysis-based approaches could waste a huge number of function evaluations (FEs); the problem reformulation-based approaches could ignore diversity maintenance to some extent; and the efficient offspring generation-based algorithms could ignore local search when generating candidate offspring solutions. Thus, we propose a sampling-based MOEA that takes advantage of the ideas of decision variable analysis, problem reformulation, and efficient offspring generation. The main contributions of this study are summarized as follows:
1. An efficient variable classification technique is proposed to guide the search in the decision space. Compared with existing exact decision variable analysis methods, the proposed classification approach is applicable to problems with different properties at a significantly lower computational cost.

2. Local search is emphasized in the proposed SLSEA, aiming to enhance the capability of MOEAs in searching for local/global optima and enabling the modification of conventional MOEAs for efficient large-scale multiobjective optimization.
The rest of this paper is organized as follows. The details of the proposed sampling-based large-scale EA, termed SLSEA, are presented in section “Method”. Next, the settings of the compared algorithms, the tested problems, and the adopted performance indicators are given in “Experimental settings”. Empirical results achieved by SLSEA in comparison with some state-of-the-art large-scale MOEAs are presented in “Comparative studies”, and conclusions are drawn in “Conclusion”.

Method

In this section, we first present the main idea of this study, followed by the principles of the proposed variable division approach. Then, the sampling strategies guided by the variable division are elaborated.

Main idea

Inspired by the decision variable analysis-based divide-and-conquer strategy and the efficient offspring generation approaches, we propose an optimization-based variable classification approach to handle LSMOPs. Instead of categorizing the decision variables into many groups, the proposed method divides all the decision variables into two groups. Generally, the first group consists of decision variables related to convergence and the second of those related to diversity. All the convergence-related/diversity-related decision variables are optimized independently without considering pairwise variable interactions. Accordingly, three different sampling strategies for convergence enhancement, diversity maintenance, and local search are designed, following the idea of efficient offspring generation (see Footnote 1). An illustrative example of the proposed offspring generation strategies on an MOP with five decision variables is presented in Fig. 1, which visually shows the principles of the three sampling strategies in SLSEA. Assume the binary vector \(\textbf{b}=(0,1,0,1,1)\) indicates the division result of an MOP with five decision variables: decision variables \(x_2,x_4,x_5\) are convergence-related, while \(x_1,x_3\) are diversity-related.
1. In Fig. 1a, the convergence-related sampling perturbs variables \(x_2\), \(x_4\), and \(x_5\) using a standard Gaussian distribution \(\mathcal {N}(0,1)\).

2. The diversity-related sampling perturbs the remaining variables (i.e., \(x_1\) and \(x_3\), which may be related to diversity) using a uniform distribution \(\mathcal {U}(0,1)\), as displayed in Fig. 1b. Using a uniform distribution to sample the diversity-related decision variable(s) is expected to enhance the exploration capability of the population.

3. As for the local search-related sampling, we use a Gaussian distribution to concentrate the sampling density around the current population, aiming to encourage exploitation, as shown in Fig. 1c.
With these ad hoc sampling strategies, we are motivated to solve LSMOPs via efficient sampling under the guidance of variable classification. Note that the standard Gaussian distribution is used in two of the designed sampling strategies owing to its symmetry, coverage, and predefined center. The symmetry ensures unbiased sampling, as we have no prior knowledge of the problem, and the coverage enables the sampling of all possible solutions. Moreover, the predefined center controls the sampling, which is expected to guide it towards the Pareto optimal set. Other distributions with similar properties can also be used in the three sampling strategies. If prior knowledge of the problem is available, a multivariate Gaussian distribution can be used instead, though it could be computationally expensive.
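To make the three strategies concrete, the following minimal Python sketch applies them to a single solution with the binary vector b = (0, 1, 0, 1, 1) from the example above. How exactly the Gaussian noise enters the convergence-related perturbation is our reading of Fig. 1, not a verbatim transcription of the algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)
b = np.array([0, 1, 0, 1, 1])      # division result from the text: 1 = convergence-related
x = rng.random(5)                  # a parent solution; variables normalized to [0, 1]

def convergence_sample(x, b):
    """Fig. 1a: perturb only the convergence-related variables with N(0, 1) noise."""
    y = x.copy()
    y[b == 1] += rng.standard_normal(int((b == 1).sum()))
    return np.clip(y, 0.0, 1.0)    # truncate to the normalized bounds (Footnote 1)

def diversity_sample(x, b):
    """Fig. 1b: redraw the diversity-related variables from U(0, 1)."""
    y = x.copy()
    y[b == 0] = rng.random(int((b == 0).sum()))
    return y

def local_sample(x):
    """Fig. 1c: sample around the current solution, the i-th variable ~ N(x_i, 1)."""
    return np.clip(x + rng.standard_normal(x.size), 0.0, 1.0)

print(convergence_sample(x, b), diversity_sample(x, b), local_sample(x), sep="\n")
```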

Schema of SLSEA

Algorithm 1 presents the schema of the proposed SLSEA. It mainly consists of five components, i.e., convergence sampling, variable classification, diversity sampling, local sampling, and environmental selection. First, a candidate solution set P of size n and a binary vector set B of size \(n_b\) are randomly initialized (Steps 1–2 in Algorithm 1). Next, B is used to sample a set of convergence-related solutions \(P_c\) (Steps 5–6 in Algorithm 1). Notably, B is updated using the proposed variable classification approach, during which the newly generated convergence-related solutions are merged into population \(P_c\). Then, the diversity-related sampling strategy (Step 7 in Algorithm 1) is adopted to sample a set of diversity-related candidate solutions \(P_d\) under the guidance of the updated B. Afterwards, a local search-related sampling is conducted to sample local solutions \(P_l\) around the current population P. Finally, the combination of all the sampled candidate solutions is updated using environmental selection. Notably, different environmental selection strategies from conventional MOEAs can be used in SLSEA; here, we use the environmental selection of NSGA-II due to its efficiency and simplicity. The procedures above are repeated until the maximum number of iterations is reached.
The detailed procedures of the convergence-related sampling are given in Algorithm 2. In B, the ith binary vector \(\textbf{b}_i\) denotes an approximation of the accurate variable classification. It is used to sample some convergence-related solutions by perturbing the convergence-related variables only. A total of \(n_s\times n_b\) candidate solutions are generated during the sampling, with \(n_s\) candidate solutions sampled for each binary vector.
In addition to the convergence-related sampling in the main loop of SLSEA, this sampling is also conducted when updating the binary vectors (Step 3 in Algorithm 3). Thus, a total of \(2n_s\times n_b\) convergence-related solutions are sampled in one generation of SLSEA.
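A sketch of the batch convergence-related sampling as described above; the function name and the choice of drawing parents at random from P are our assumptions, and the paper's Algorithm 2 may differ in detail:

```python
import numpy as np

rng = np.random.default_rng(1)

def convergence_sampling(P, B, n_s):
    """Sample n_s candidates per binary vector by perturbing only the 1-valued
    (convergence-related) positions of randomly chosen parents with N(0, 1),
    yielding n_s * n_b candidates in total."""
    P_c = []
    for b in B:                                   # one binary vector = one tentative classification
        mask = b == 1
        for _ in range(n_s):
            x = P[rng.integers(len(P))].copy()    # assumed: parent picked at random from P
            x[mask] += rng.standard_normal(int(mask.sum()))
            P_c.append(np.clip(x, 0.0, 1.0))      # truncate to the normalized bounds (Footnote 1)
    return np.array(P_c)

P = rng.random((10, 5))                           # toy population: n = 10, d = 5
B = rng.integers(0, 2, size=(3, 5))               # n_b = 3 binary vectors
print(convergence_sampling(P, B, n_s=5).shape)    # (15, 5)
```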

Variable classification

In the proposed SLSEA, variable classification is used to update the binary vectors so that they better approximate the accurate variable classification; refer to Algorithm 3. An illustrative example on an MOP with five decision variables is given in Fig. 2 to show the detailed procedures. During the variable classification, the binary vector set B of size \(n_b\) is regarded as the parent population, and the quality of each binary vector is assessed based on the quality of the sampled solutions using Algorithm 4. The mating selection strategy of NSGA-II is applied to select the mating pool for generating offspring \(B'\). Notably, the single-point crossover and bit-wise mutation of the canonical GA are used for generating offspring binary vectors; readers can refer to the literature [8, 39] for more details. Next, a set of convergence-related solutions is sampled for each offspring binary vector using Algorithm 2, followed by the quality assessment of each offspring binary vector. Finally, \(n_b\) binary vectors are selected from \(B\cup B'\) using the environmental selection strategy of NSGA-II. Besides, the convergence-related solution set is also updated.
As a crucial part of the variable classification, the quality assessment determines the accuracy of the variable classification results for guiding the sampling of convergence- and diversity-related solutions. Its detailed procedures are displayed in Algorithm 4, which uses two criteria:
(1) Instead of using the average Euclidean distance to measure the convergence degree of a candidate solution set, the grid-based distance given in (2) is used. It can eliminate the effect of neighboring points [40] and thus tolerate some inaccuracy in the quality assessment.
$$d_i= \left\lfloor N_d\times \frac{\left\Vert \textbf{f}(\textbf{x}_i) - \textbf{l}\right\Vert -\left\Vert \textbf{l}\right\Vert }{\left\Vert \textbf{u}\right\Vert -\left\Vert \textbf{l}\right\Vert }\right\rfloor , \qquad (2)$$
$$q_1 = \sum _{i=1}^{N_s}{\text {sign}\left( \min \left( \textbf{f}(\textbf{x}_i) - \textbf{l}\right) \right) \times d_i}, \qquad (3)$$
where \(\textbf{l}\) and \(\textbf{u}\) are the lower and upper boundaries of the objective vectors, sign(\(*\)) is the sign function, \(\lfloor *\rfloor \) is the floor function, \(\left\Vert {*}\right\Vert \) denotes the Euclidean norm of a vector, and \(N_d\) is the number of solutions in the current solution set. If the ideal point dominates a solution, its grid-based distance will be positive; otherwise, its grid-based distance will be negative. Such a solution thus reduces the average distance of the entire solution set, encouraging the generation of convergence-related binary vectors.

(2) The use of \(q_2\) is inspired by the decision variable analysis method in LMEA [32]: a smaller \(q_2\) value is likely to indicate a higher correlation with diversity, and it can be calculated using the non-dominated sorting method [2].
The design of \(q_1\) and \(q_2\) is intentionally intuitive, aiming to demonstrate the effectiveness of the reformulation process in a relatively straightforward manner. The first objective reflects the average convergence degree of the solution set, which is a simplified version of the convergence degree in LMEA. The second objective is expected to minimize the number of convergence-related decision variables, aiming to reduce the dimensionality of the sub-problem and thus accelerate large-scale multiobjective optimization. Nonetheless, the framework is compatible with other tailored functions of similar purpose.
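A minimal sketch of Eqs. (2)–(3), assuming the interplay of \(N_d\) and \(N_s\) follows our reading (the function name and toy data are ours):

```python
import numpy as np

def quality_q1(F, l, u, N_d):
    """Grid-based convergence quality of a set of objective vectors F
    (one row per solution) w.r.t. lower/upper boundaries l and u."""
    F, l, u = np.asarray(F, float), np.asarray(l, float), np.asarray(u, float)
    # Eq. (2): grid index of each solution from the norms of the shifted vectors.
    d = np.floor(N_d * (np.linalg.norm(F - l, axis=1) - np.linalg.norm(l))
                 / (np.linalg.norm(u) - np.linalg.norm(l)))
    # Eq. (3): the sign is positive only if the ideal point l dominates the
    # solution (every objective component exceeds l), and negative otherwise.
    s = np.sign(np.min(F - l, axis=1))
    return float(np.sum(s * d))

# Toy example: two bi-objective solutions with l = (0, 0) and u = (10, 10).
print(quality_q1([[1.0, 2.0], [4.0, 3.0]], l=[0.0, 0.0], u=[10.0, 10.0], N_d=10))
```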
During the variable classification, a set of binary vectors is optimized simultaneously. We take advantage of the binary vectors to sample some candidate solutions for diversity maintenance, as presented in Algorithm 5. We first obtain the best-converged solution \(\textbf{p}\in P\) as the baseline to ensure the convergence of the candidate solutions to be sampled (Step 1 in Algorithm 5). Here, \(\Vert \textbf{f}(\textbf{p})\Vert \) denotes the Euclidean norm of solution \(\textbf{p}\) in the objective space, which reflects its convergence degree. By selecting the solution with the best convergence degree, Step 1 of Algorithm 5 is expected to guide the sampling of diversity-related solutions with good convergence. Then, we sample n candidate solutions for each binary vector by perturbing the diversity-related decision variables (i.e., the elements with zero value in a binary vector). Specifically, a uniform distribution \(\mathcal {U}(0,1)\) is used to generate a random number k for perturbing the decision vector (Step 3 in Algorithm 5), encouraging the sampling of diversity-related candidate solutions. Finally, all the sampled candidate solutions are merged into population \(P_d\).
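A sketch of this diversity-related sampling under our assumptions: the uniform number k is read as a direct redraw of each diversity-related variable from U(0, 1), which is one plausible reading of Step 3 rather than the paper's exact formula:

```python
import numpy as np

rng = np.random.default_rng(2)

def diversity_sampling(P, F, B, n):
    """Start from the best-converged solution p (smallest ||f(p)||, Step 1) and,
    for each binary vector, sample n candidates by redrawing the 0-valued
    (diversity-related) variables from U(0, 1)."""
    p = P[np.argmin(np.linalg.norm(F, axis=1))]    # baseline with the best convergence
    P_d = []
    for b in B:
        mask = b == 0                              # diversity-related positions
        for _ in range(n):
            x = p.copy()
            x[mask] = rng.random(int(mask.sum()))  # assumed uniform redraw
            P_d.append(x)
    return np.array(P_d)

P = rng.random((10, 5)); F = rng.random((10, 2))   # toy decision/objective matrices
B = rng.integers(0, 2, size=(3, 5))                # n_b = 3 binary vectors
print(diversity_sampling(P, F, B, n=4).shape)      # (12, 5)
```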
While the variable classification and the diversity-related sampling deal with convergence enhancement and diversity maintenance, respectively, the local search-related sampling is expected to conduct local search around existing solutions. The detailed procedures of the local search-related sampling are given in Algorithm 6. Specifically, we conduct a local search around \(n_s\) solutions randomly selected from population P. Notably, this selection may also choose some dominated solutions to encourage diversity maintenance, since such solutions can be important in global optimization [40, 41]. Then, a value randomly drawn from a standard Gaussian distribution is used to perturb each chosen solution. Mathematically, for a candidate solution \(\textbf{x}\), the ith variable of a perturbed solution follows the Gaussian distribution \(\mathcal {N}(x_i,1)\) (refer to Fig. 1c). Finally, a total of \(n_s\times n\) candidate solutions are merged into population \(P_l\) during the local search-related sampling.
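A corresponding sketch of the local search-related sampling (the function name and the sampling count per parent are our assumptions; truncation to the bounds follows Footnote 1):

```python
import numpy as np

rng = np.random.default_rng(3)

def local_sampling(P, n_s, n):
    """Pick n_s solutions at random (dominated ones included on purpose) and draw
    n perturbed copies around each, the i-th variable ~ N(x_i, 1), truncated to
    the normalized bounds; n_s * n candidates in total."""
    idx = rng.choice(len(P), size=n_s, replace=False)
    P_l = [np.clip(P[i] + rng.standard_normal(P.shape[1]), 0.0, 1.0)
           for i in idx for _ in range(n)]
    return np.array(P_l)

P = rng.random((10, 5))                      # toy population: n = 10, d = 5
print(local_sampling(P, n_s=3, n=4).shape)   # (12, 5)
```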

Experimental settings

We first briefly introduce the adopted performance indicators, followed by the details of the involved parameter settings. Each algorithm is run 20 times on each test problem independently for the Wilcoxon rank-sum test [42] at a significance level of 0.05. We use the symbols ‘\(+\)’, ‘−’, and ‘\(\approx \)’ to indicate that the compared algorithm performs significantly better than, significantly worse than, or statistically similarly to SLSEA, respectively. All the compared algorithms are implemented in PlatEMO v2.6 [43] on a PC with two 2.1 GHz Intel(R) Xeon(R) Gold 6130 CPUs (32 CPU cores) and 128 GB RAM on the Windows 10 64-bit operating system.
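As an illustration of this protocol, a small Python sketch using scipy.stats.ranksums on placeholder data (the data values and the tie-breaking by medians are our assumptions; smaller IGD is better):

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(4)
igd_slsea = rng.normal(0.40, 0.05, size=20)   # placeholder IGD values of 20 runs
igd_other = rng.normal(0.45, 0.05, size=20)   # placeholder values of a compared algorithm

stat, p = ranksums(igd_other, igd_slsea)
if p >= 0.05:
    symbol = '~'                              # statistically similar ('≈' in the tables)
else:                                         # significant difference: compare medians
    symbol = '+' if np.median(igd_other) < np.median(igd_slsea) else '-'
print(f"p = {p:.4f}, symbol: {symbol}")
```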

Performance indicator

To assess the performance of the compared algorithms on large-scale multiobjective optimization, two widely used indicators, namely the IGD indicator [44] and the hypervolume (HV) indicator [45], are used. Both indicators can assess the convergence degree and distribution uniformity of a solution set in multiobjective optimization. Nevertheless, the two indicators differ in their requirements for reference points.
For the IGD indicator, a set of uniformly distributed points located precisely on the PF is required to give an accurate assessment of the quality of a solution set. By contrast, only a single point close to the nadir point of the PF is needed for HV calculation [46]. Generally, IGD calculation requires prior knowledge about the PF, which makes it suitable for mathematically formulated benchmark problems with known PFs, e.g., the DTLZ [48], WFG [49], and LSMOP [50] problems. The IGD\(+\) indicator, which is weakly Pareto compliant whereas IGD is Pareto non-compliant [47], can also be used for a better assessment of quality. However, for a real-world MOP such as a TREE problem [23], obtaining a set of uniformly distributed points on its PF is impractical. Thus, we adopt the HV indicator to assess the quality of a solution set using an approximation of the real nadir point.
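For reference, a minimal Python sketch of the IGD computation as described (the function name and the toy front are ours; PlatEMO provides its own implementation):

```python
import numpy as np

def igd(reference_front, solutions):
    """Inverted generational distance: mean distance from each reference point
    (uniformly distributed on the true PF) to its nearest obtained solution;
    smaller values indicate better convergence and spread."""
    R = np.asarray(reference_front, float)   # (r, m) points on the PF
    S = np.asarray(solutions, float)         # (n, m) obtained objective vectors
    dists = np.linalg.norm(R[:, None, :] - S[None, :, :], axis=2)   # (r, n) pairwise
    return float(dists.min(axis=1).mean())

# Toy example on the linear bi-objective front f1 + f2 = 1.
t = np.linspace(0.0, 1.0, 101)
ref = np.column_stack([t, 1.0 - t])
sol = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
print(igd(ref, sol))
```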

Test problems

In the following experiments, the test instances are selected from three benchmark test suites, i.e., DTLZ [48], LSMOP [50], and TREE [23]. In all these test instances, the number of objectives m is set to two, and the detailed settings are given as follows.
(1) Three representative DTLZ problems with fully separable variable interactions are tested for the ablation studies, i.e., DTLZ1, DTLZ4, and DTLZ7 [48]. DTLZ1 is a multimodal MOP on which conventional MOEAs can hardly obtain converged solutions; DTLZ4 may hinder MOEAs from achieving a well-distributed set of solutions; and DTLZ7 tests the ability of an algorithm to maintain subpopulations in different Pareto optimal regions. The number of decision variables is set to 5000, and the maximum number of FEs is set to \(1\times 10^6\).

(2) Nine LSMOP problems are tested in this study, involving fully separable (LSMOP1, LSMOP5, and LSMOP9), partially separable (LSMOP2 and LSMOP6), or mixed (LSMOP3, LSMOP4, LSMOP7, and LSMOP8) variable interactions [50]. Generally, the LSMOP problems are tailored for examining the performance of EAs in large-scale multiobjective optimization, and we set the maximum number of FEs to 200\(\times d\) for test instances with d decision variables (d ranges in \(\{100, 200, 500, 1000, 2000, 5000\}\)).

Parameter settings

For fair comparisons, we adopt the recommended parameter settings with which the compared algorithms have achieved the best performance reported in the literature. Moreover, to give an insight into the performance of SLSEA in large-scale multiobjective optimization, different variants of SLSEA are designed as well.

Variants of SLSEA

Several ablation studies are conducted to investigate the effect of the different sampling strategies on the performance of SLSEA in large-scale multiobjective optimization. Five variants of SLSEA using different combinations of sampling strategies are tested on three DTLZ problems [48], three LSMOP problems [50], and two TREE problems [23]. The first variant, without diversity sampling and local search-related sampling, is called SLSEA\(\setminus \)DL; the second one, with only local search-related sampling, is called SLSEA\(\setminus \)CD; the third one, without diversity sampling, is called SLSEA\(\setminus \)D; the fourth one, without local search-related sampling, is called SLSEA\(\setminus \)L; and the last one, with all the components, is named SLSEA. In these five variants, the binary vector set size \(n_b\) is set to 10 and the sampling size \(n_s\) is set to five according to our empirical results.

SOTA large-scale MOEAs

The proposed SLSEA is compared with four representative large-scale MOEAs, including LSMOF [35], MOEA/DVA [30], DGEA [37], and LMOCSO [36]. In LSMOF, NSGA-II is embedded as the optimizer for the second stage using 50\(\%\) of the FEs for fair comparisons, and the population size n is set to 100 for all the compared algorithms. To be more specific, the number of reference solutions r is set to ten, and the population size \(n_s\) and the scale factor \(F_m\) of the single-objective differential evolution algorithm are set to 30 and 0.8, respectively. For MOEA/DVA, the number of sampled solutions in the control variable analysis, NIC, is set to 20; the maximum number of tries required to judge an interaction, NIA, is set to six; and MOEA/D based on differential evolution (MOEA/D-DE) [51] is used for the uniformity optimization. The number of reference solutions r is set to ten in DGEA, and there is no specific parameter in LMOCSO. Regarding SLSEA, the population size for variable classification \(n_b\) is set to ten and the sampling size \(n_s\) is set to five.

Ablation studies

In our proposed SLSEA, two additional sampling strategies, i.e., the diversity-related sampling and the local search-related sampling, are involved. It is thus essential to investigate the contribution of each strategy to the performance of SLSEA. We have conducted a series of ablation studies by comparing the performance of different variants of SLSEA in solving LSMOPs. Empirically, we compare the original algorithm (termed SLSEA) with its four variants (refer to “Variants of SLSEA” in “Experimental settings”) on eight test instances selected from three benchmark test suites, including three DTLZ problems as conventional artificial MOPs, three LSMOP problems as artificial LSMOPs, and two TREE problems extracted from real-world applications.
The convergence profiles of the five compared variants on DTLZ1, DTLZ4, and DTLZ7 with 5000 decision variables are displayed in Fig. 3 in terms of IGD values. To be more specific, the mean IGD values are denoted by different markers, and the corresponding lower and upper boundaries are given by the color bars. SLSEA\(\setminus \)CD has achieved the smallest mean IGD values on DTLZ1; SLSEA\(\setminus \)D has achieved the best results on DTLZ4; and SLSEA\(\setminus \)DL has performed the best on DTLZ7. All in all, SLSEA achieves balanced performance across these three test problems.
The convergence profiles on three representative LSMOP problems with 5000 decision variables are given in Fig. 4 in terms of IGD values. Unlike the comparison results on the DTLZ problems, SLSEA\(\setminus \)CD has performed the best in all three cases, followed by SLSEA\(\setminus \)DL, while SLSEA\(\setminus \)D, SLSEA\(\setminus \)L, and SLSEA performed similarly. For the LSMOP problems, the local search-based sampling is capable of handling LSMOPs with complex variable interactions.
Since the TREE problems are constructed from real-world applications whose true PFs are unknown, we use HV values to show the convergence profiles on TREE1 with 15,000 decision variables and TREE4 with 30,000 decision variables (as shown in Fig. 5). Different from the results on the DTLZ and LSMOP problems, SLSEA has performed the best, while SLSEA\(\setminus \)DL and SLSEA\(\setminus \)L performed the worst.
In summary, the local search-related sampling has contributed significantly to the performance of SLSEA. Due to the specific properties of each tested problem, different combinations of sampling strategies have led to different performances. For instance, on the three LSMOP problems, the SLSEA variant without diversity and local search sampling performed second best, indicating that diversity maintenance is not the main difficulty of these LSMOPs. DTLZ1 is a multimodal MOP where diversity maintenance is essential, and thus SLSEA\(\setminus \)DL has performed the worst there due to the loss of diversity maintenance.
The different variants of SLSEA have performed differently on the artificial test instances, where the local search-related sampling has contributed the most to their performance. Nevertheless, SLSEA has performed the best on the two real-world test instances. It can be summarized that local search-related sampling is effective in large-scale optimization. Besides, existing artificial benchmark problems may be too regular, involving biased preferences and missing real-world properties. Thus, we still use the original version of SLSEA in the following experiments due to its comprehensive consideration of problem properties and its robustness.

Comparative studies

The general performance of SLSEA is compared with that of four SOTA large-scale MOEAs on the LSMOP and TREE test suites. The computational complexity of the proposed SLSEA and the computation time of the compared algorithms are also presented to indicate its efficiency. Three different types of experimental results indicate the overall performance, the dynamic behavior, and the final results of all the compared algorithms, respectively.
Table 1 The IGD results of CCGDE3, LCSA, IM-MOEA, LSMOF, MOEA/DVA, DGEA, LMOCSO and SLSEA on 54 test instances

| Problem | D | CCGDE3 | LCSA | IM-MOEA | LSMOF | MOEA/DVA | DGEA | LMOCSO | SLSEA |
|---|---|---|---|---|---|---|---|---|---|
| LSMOP1 | 100 | 3.62e+0(5.58e-1)− | 6.31e-1(5.38e-2)− | 6.21e-1(5.30e-2)− | 2.89e-1(5.91e-2)+ | 4.39e+0(3.68e-1)− | 3.21e-1(6.12e-2)+ | 3.32e-1(2.84e-2)+ | 4.92e-1(1.08e-1) |
| | 200 | 4.22e+0(6.97e-1)− | 6.38e-1(4.44e-2)− | 8.85e-1(1.02e-1)− | 3.39e-1(4.81e-2)+ | 6.28e+0(5.34e-1)− | 3.20e-1(3.61e-2)+ | 4.31e-1(6.54e-2)≈ | 4.73e-1(1.23e-1) |
| | 500 | 4.88e+0(3.47e-1)− | 5.70e-1(1.03e-1)− | 1.10e+0(1.19e-1)− | 3.85e-1(5.81e-2)≈ | 7.37e+0(4.25e-1)− | 3.25e-1(3.22e-2)+ | 5.81e-1(4.95e-2)− | 4.43e-1(1.38e-1) |
| | 1000 | 5.20e+0(2.18e-1)− | 5.25e-1(1.22e-1)− | 1.33e+0(1.52e-1)− | 3.80e-1(6.12e-2)≈ | 7.82e+0(3.08e-1)− | 3.84e-1(1.36e-1)≈ | 8.61e-1(1.24e-1)− | 4.28e-1(1.30e-1) |
| | 2000 | 5.70e+0(3.00e-1)− | 3.78e-1(1.33e-1)≈ | 1.46e+0(1.15e-1)− | 3.53e-1(4.12e-2)≈ | 8.15e+0(1.53e-1)− | 3.36e-1(1.41e-2)≈ | 1.09e+0(1.09e-1)− | 3.79e-1(1.11e-1) |
| | 5000 | 6.02e+0(2.22e-1)− | 3.11e-1(6.63e-2)+ | 1.55e+0(8.11e-2)− | 4.20e-1(8.55e-2)− | 8.33e+0(1.41e-1)− | 3.34e-1(1.03e-2)≈ | 1.31e+0(1.25e-1)− | 3.91e-1(1.04e-1) |
| LSMOP2 | 100 | 2.30e-1(8.40e-3)− | 1.39e-1(4.13e-3)− | 1.43e-1(6.19e-3)− | 4.83e-2(2.92e-3)+ | 2.46e-1(2.82e-3)− | 7.14e-2(3.03e-2)− | 1.58e-1(4.62e-3)− | 5.10e-2(4.04e-3) |
| | 200 | 1.47e-1(2.47e-3)− | 7.81e-2(4.53e-4)− | 1.05e-1(2.62e-3)− | 3.03e-2(1.71e-3)− | 1.56e-1(1.02e-3)− | 3.28e-2(2.18e-3)− | 9.69e-2(1.83e-3)− | 2.85e-2(7.79e-4) |
| | 500 | 7.24e-2(1.29e-3)− | 3.35e-2(3.29e-4)− | 5.59e-2(1.06e-3)− | 1.70e-2(6.24e-4)− | 7.33e-2(2.83e-4)− | 1.47e-2(9.54e-4)≈ | 4.40e-2(8.50e-4)− | 1.46e-2(3.32e-4) |
| | 1000 | 3.96e-2(5.04e-4)− | 1.91e-2(2.93e-4)− | 3.38e-2(4.69e-4)− | 1.17e-2(3.79e-4)− | 4.02e-2(2.84e-4)− | 8.35e-3(3.42e-4)+ | 2.41e-2(4.57e-4)− | 1.00e-2(5.80e-4) |
| | 2000 | 2.17e-2(2.70e-4)− | 1.25e-2(2.44e-4)− | 2.07e-2(3.51e-4)− | 8.76e-3(2.96e-4)− | 2.27e-2(4.03e-4)− | 5.58e-3(2.34e-4)+ | 1.34e-2(2.99e-4)− | 7.13e-3(3.40e-4) |
| | 5000 | 1.09e-2(2.36e-4)− | 9.49e-3(2.81e-4)− | 1.30e-2(2.30e-4)− | 6.99e-3(2.69e-4)− | 1.25e-2(6.90e-4)− | 4.14e-3(1.50e-4)+ | 7.12e-3(2.01e-4)− | 5.82e-3(3.95e-4) |
| LSMOP3 | 100 | 1.73e+1(2.98e+0)− | 1.51e+0(3.65e-3)≈ | 1.10e+1(2.37e+0)− | 1.49e+0(4.45e-3)+ | 9.31e+2(6.59e+2)− | 1.72e+0(1.31e+0)≈ | 1.15e+0(1.03e+0)+ | 1.51e+0(2.98e-3) |
| | 200 | 2.13e+1(2.30e+0)− | 1.55e+0(3.19e-3)− | 1.45e+1(2.45e+0)− | 1.53e+0(3.99e-3)+ | 6.64e+2(9.48e+2)− | 1.41e+0(2.87e-1)+ | 9.50e-1(1.71e-1)+ | 1.55e+0(1.75e-3) |
| | 500 | 2.49e+1(1.27e+0)− | 1.49e+0(3.44e-2)+ | 1.53e+1(3.31e+0)− | 1.56e+0(1.57e-3)+ | 7.75e+2(9.56e+2)− | 1.44e+0(2.65e-1)≈ | 9.63e-1(3.33e-1)+ | 1.57e+0(1.27e-3) |
| | 1000 | 2.81e+1(1.34e+0)− | 1.49e+0(2.13e-2)+ | 1.72e+1(2.76e+0)− | 1.57e+0(1.32e-3)+ | 8.33e+2(7.74e+2)− | 1.41e+0(3.06e-1)≈ | 1.31e+0(1.32e+0)+ | 1.57e+0(4.81e-4) |
| | 2000 | 3.09e+1(9.98e-1)− | 1.46e+0(1.79e-2)+ | 1.89e+1(3.25e+0)− | 1.57e+0(8.97e-4)+ | 1.35e+3(1.13e+3)− | 1.36e+0(3.15e-1)≈ | 9.86e-1(5.27e-1)+ | 1.58e+0(5.73e-4) |
| | 5000 | 3.30e+1(6.96e-1)− | 1.45e+0(1.04e-2)+ | 1.88e+1(2.93e+0)− | 1.58e+0(4.24e-4)+ | 9.25e+2(8.84e+2)− | 1.26e+0(3.06e-1)≈ | 8.74e-1(5.00e-2)+ | 1.58e+0(1.37e-3) |
| LSMOP4 | 100 | 3.31e-1(1.77e-2)− | 2.58e-1(5.30e-3)− | 2.07e-1(9.02e-3)− | 9.94e-2(1.16e-2)+ | 3.43e-1(4.46e-3)− | 1.62e-1(1.79e-2)≈ | 2.18e-1(3.43e-2)− | 1.59e-1(5.31e-3) |
| | 200 | 2.16e-1(7.86e-3)− | 1.61e-1(2.76e-3)− | 1.35e-1(3.84e-3)− | 7.49e-2(5.47e-3)+ | 2.27e-1(1.82e-3)− | 9.74e-2(8.25e-3)− | 1.59e-1(6.16e-3)− | 9.23e-2(1.21e-3) |
| | 500 | 1.16e-1(2.12e-3)− | 9.06e-2(1.56e-3)− | 7.39e-2(1.26e-3)− | 4.31e-2(1.88e-3)− | 1.22e-1(5.03e-4)− | 4.68e-2(3.41e-3)− | 8.64e-2(1.43e-3)− | 4.22e-2(4.71e-4) |
| | 1000 | 7.08e-2(1.70e-3)− | 5.83e-2(1.71e-3)− | 4.30e-2(7.26e-4)− | 2.62e-2(7.41e-4)− | 7.00e-2(3.69e-4)− | 2.56e-2(1.43e-3)− | 5.02e-2(5.05e-4)− | 2.34e-2(2.49e-4) |
| | 2000 | 4.08e-2(1.40e-3)− | 3.86e-2(2.05e-4)− | 2.52e-2(7.19e-4)− | 1.64e-2(4.01e-4)− | 4.00e-2(3.53e-4)− | 1.42e-2(8.02e-4)≈ | 2.85e-2(2.74e-4)− | 1.38e-2(3.09e-4) |
| | 5000 | 2.03e-2(6.84e-4)− | 1.92e-2(2.67e-4)− | 1.36e-2(2.46e-4)− | 1.00e-2(3.94e-4)− | 2.04e-2(4.92e-4)− | 7.22e-3(3.96e-4)+ | 1.39e-2(2.32e-4)− | 8.24e-3(4.73e-4) |
| LSMOP5 | 100 | 8.54e+0(1.37e+0)− | 7.42e-1(2.28e-16)≈ | 1.81e+0(2.26e-1)− | 7.42e-1(2.28e-16)≈ | 1.36e+1(9.78e-1)− | 3.94e+0(8.50e-1)− | 6.37e-1(5.62e-2)+ | 7.42e-1(2.28e-16) |
| | 200 | 9.93e+0(9.03e-1)− | 7.42e-1(2.28e-16)≈ | 2.35e+0(2.44e-1)− | 7.42e-1(2.28e-16)≈ | 1.52e+1(9.02e-1)− | 4.01e+0(7.92e-1)− | 7.44e-1(9.52e-2)≈ | 7.42e-1(2.28e-16) |
| | 500 | 1.13e+1(8.43e-1)− | 7.42e-1(2.28e-16)≈ | 2.72e+0(2.99e-1)− | 7.42e-1(2.28e-16)≈ | 1.68e+1(5.45e-1)− | 2.01e+0(1.29e+0)− | 1.27e+0(2.11e-1)− | 7.42e-1(2.28e-16) |
| | 1000 | 1.22e+1(6.52e-1)− | 7.42e-1(2.28e-16)≈ | 2.84e+0(2.05e-1)− | 7.42e-1(2.28e-16)≈ | 1.71e+1(4.65e-1)− | 1.54e+0(1.39e+0)− | 1.83e+0(2.78e-1)− | 7.42e-1(2.28e-16) |
| | 2000 | 1.29e+1(8.44e-1)− | 7.42e-1(2.28e-16)≈ | 3.06e+0(2.45e-1)− | 7.42e-1(2.28e-16)≈ | 1.76e+1(4.08e-1)− | 1.18e+0(1.38e+0)− | 2.37e+0(2.82e-1)− | 7.42e-1(2.28e-16) |
| | 5000 | 1.38e+1(1.22e+0)− | 7.42e-1(2.28e-16)≈ | 3.29e+0(2.74e-1)− | 7.42e-1(2.28e-16)≈ | 1.79e+1(1.92e-1)− | 1.09e+0(1.09e+0)− | 2.80e+0(2.99e-1)− | 7.42e-1(2.28e-16) |
| LSMOP6 | 100 | 1.11e+0(2.92e-2)− | 7.25e-1(1.28e-2)− | 9.75e-1(5.98e-1)− | 4.11e-1(5.50e-2)≈ | 6.41e+2(6.02e+2)− | 6.87e-1(1.55e-1)− | 7.89e-1(2.62e-2)− | 4.89e-1(1.12e-1) |
| | 200 | 9.34e-1(7.09e-3)− | 7.03e-1(3.10e-3)− | 6.55e-1(1.35e-2)− | 3.64e-1(1.27e-2)≈ | 1.29e+3(1.45e+3)− | 5.56e-1(2.25e-1)− | 8.00e-1(1.37e-2)− | 3.87e-1(6.99e-2) |
| | 500 | 8.12e-1(1.95e-3)− | 6.82e-1(5.85e-4)− | 5.09e-1(5.42e-3)− | 3.33e-1(1.48e-2)≈ | 2.24e+3(2.37e+3)− | 5.05e-1(2.59e-1)≈ | 7.81e-1(3.87e-3)− | 3.24e-1(8.61e-3) |
| | 1000 | 7.75e-1(2.64e-4)− | 6.76e-1(2.77e-4)− | 4.50e-1(2.34e-3)− | 3.21e-1(9.48e-3)− | 1.68e+3(2.27e+3)− | 4.50e-1(2.42e-1)≈ | 7.60e-1(2.28e-2)− | 3.12e-1(2.66e-3) |
| | 2000 | 7.57e-1(8.16e-5)− | 6.74e-1(1.45e-4)− | 4.17e-1(2.02e-3)− | 3.21e-1(1.00e-2)− | 1.55e+3(1.28e+3)− | 6.03e-1(2.40e-1)− | 7.48e-1(2.88e-2)− | 3.10e-1(5.27e-3) |
| | 5000 | 7.48e-1(1.97e-5)− | 6.73e-1(7.16e-5)− | 3.95e-1(2.39e-3)− | 3.16e-1(1.16e-2)− | 2.37e+3(2.50e+3)− | 6.61e-1(1.15e-1)− | 7.47e-1(2.63e-4)− | 3.08e-1(5.41e-3) |
| LSMOP7 | 100 | 1.10e+4(3.23e+3)− | 1.45e+0(3.98e-3)+ | 4.99e+2(1.37e+2)− | 1.45e+0(5.59e-3)≈ | 3.34e+4(6.36e+3)− | 1.66e+3(7.30e+2)− | 5.51e+0(2.92e+0)− | 1.45e+0(2.13e-3) |
| | 200 | 1.57e+4(3.27e+3)− | 1.49e+0(1.84e-3)+ | 9.31e+2(3.29e+2)− | 1.48e+0(2.53e-3)≈ | 4.34e+4(4.61e+3)− | 2.22e+3(9.47e+2)− | 1.28e+1(7.19e+0)− | 1.49e+0(1.12e-3) |
| | 500 | 2.24e+4(4.51e+3)− | 1.51e+0(8.37e-4)− | 9.75e+2(3.85e+2)− | 1.50e+0(7.88e-4)≈ | 5.14e+4(3.20e+3)− | 8.96e+2(9.74e+2)− | 6.74e+1(6.91e+1)− | 1.51e+0(6.42e-4) |
| | 1000 | 2.73e+4(5.76e+3)− | 1.52e+0(2.78e-4)− | 9.38e+2(2.77e+2)− | 1.51e+0(4.90e-4)≈ | 5.50e+4(3.14e+3)− | 3.08e+2(7.50e+2)− | 1.90e+2(1.63e+2)− | 1.51e+0(4.89e-4) |
| | 2000 | 3.04e+4(5.03e+3)− | 1.52e+0(1.58e-4)− | 1.22e+3(3.42e+2)− | 1.51e+0(2.77e-4)≈ | 5.74e+4(1.28e+3)− | 7.05e+1(1.92e+2)− | 2.39e+2(2.93e+2)− | 1.51e+0(1.03e-3) |
| | 5000 | 3.23e+4(3.64e+3)− | 1.52e+0(7.44e-5)− | 9.93e+2(2.28e+2)− | 1.52e+0(1.30e-4)≈ | 6.01e+4(1.43e+3)− | 2.14e+2(6.54e+2)− | 7.61e+2(6.99e+2)− | 1.52e+0(3.91e-4) |
| LSMOP8 | 100 | 7.07e+0(8.72e-1)− | 7.42e-1(2.28e-16)≈ | 9.98e-1(1.36e-1)− | 7.42e-1(2.28e-16)≈ | 1.19e+1(9.47e-1)− | 2.83e+0(7.49e-1)− | 6.45e-1(4.13e-2)+ | 7.42e-1(2.28e-16) |
| | 200 | 7.54e+0(8.16e-1)− | 7.42e-1(2.28e-16)≈ | 1.05e+0(1.61e-1)− | 7.42e-1(2.28e-16)≈ | 1.31e+1(6.49e-1)− | 1.52e+0(1.07e+0)− | 7.03e-1(5.91e-2)+ | 7.42e-1(2.28e-16) |
| | 500 | 8.78e+0(7.47e-1)− | 7.42e-1(2.28e-16)≈ | 1.21e+0(1.49e-1)− | 7.42e-1(2.28e-16)≈ | 1.43e+1(5.67e-1)− | 8.01e-1(4.34e-1)≈ | 1.17e+0(1.58e-1)− | 7.42e-1(2.28e-16) |
| | 1000 | 9.20e+0(5.20e-1)− | 7.42e-1(2.28e-16)≈ | 1.14e+0(1.67e-1)− | 7.42e-1(2.28e-16)≈ | 1.46e+1(2.98e-1)− | 8.71e-1(7.61e-1)− | 1.56e+0(1.75e-1)− | 7.42e-1(2.28e-16) |
| | 2000 | 9.95e+0(7.37e-1)− | 6.82e-1(5.18e-2)+ | 1.18e+0(8.55e-2)− | 7.42e-1(2.28e-16)≈ | 1.49e+1(3.82e-1)− | 8.20e-1(7.22e-1)− | 1.97e+0(2.13e-1)− | 7.42e-1(2.28e-16) |
| | 5000 | 1.08e+1(1.08e+0)− | 7.42e-1(2.28e-16)≈ | 1.11e+0(9.75e-2)− | 7.42e-1(2.28e-16)≈ | 1.53e+1(2.24e-1)− | 9.88e-1(1.21e+0)− | 2.34e+0(2.59e-1)− | 7.42e-1(2.28e-16) |
| LSMOP9 | 100 | 1.32e+1(4.07e+0)− | 8.10e-1(1.14e-16)≈ | 7.69e-1(1.46e-1)+ | 8.10e-1(1.14e-16)≈ | 1.60e+1(2.86e+0)− | 2.44e+0(8.12e-1)− | 1.37e+0(5.97e-1)− | 8.10e-1(1.14e-16) |
| | 200 | 2.01e+1(3.00e+0)− | 8.10e-1(1.14e-16)− | 7.44e-1(1.47e-1)≈ | 8.10e-1(1.14e-16)− | 2.71e+1(2.28e+0)− | 1.83e+0(7.88e-1)− | 5.85e-1(3.58e-2)+ | 7.70e-1(6.19e-2) |
| | 500 | 2.61e+1(5.73e+0)− | 8.10e-1(1.14e-16)− | 4.84e-1(2.36e-1)+ | 8.09e-1(1.42e-3)− | 3.62e+1(1.62e+0)− | 1.74e+0(8.71e-1)− | 5.14e-1(1.11e-1)+ | 7.34e-1(9.86e-2) |
| | 1000 | 2.75e+1(2.97e+0)− | 8.09e-1(1.35e-4)− | 2.80e-1(1.91e-1)+ | 7.99e-1(2.09e-2)− | 4.11e+1(1.96e+0)− | 4.09e+0(1.23e+0)− | 5.31e-1(7.61e-2)+ | 6.84e-1(1.22e-1) |
| | 2000 | 2.96e+1(2.77e+0)− | 8.09e-1(2.00e-5)− | 2.49e-1(1.76e-1)+ | 7.76e-1(4.22e-2)− | 4.32e+1(1.17e+0)− | 5.01e+0(2.17e+0)− | 5.55e-1(4.55e-2)≈ | 6.21e-1(1.37e-1) |
| | 5000 | 3.28e+1(3.38e+0)− | 8.08e-1(1.97e-5)− | 2.18e-1(1.99e-1)+ | 7.95e-1(1.10e-2)− | 4.49e+1(6.11e-1)− | 4.34e+0(1.30e+0)− | 8.06e-1(1.44e-1)− | 4.95e-1(4.80e-2) |
| +/−/≈ | | 0/54/0 | 8/32/14 | 5/48/1 | 11/18/25 | 0/54/0 | 8/32/14 | 13/38/3 | |

‘+’, ‘−’ and ‘≈’ indicate that the result is significantly better than, significantly worse than and statistically similar to that obtained by SLSEA, respectively. The best result in each row is presented in bold type
Table 2 The HV results of CCGDE3, LCSA, IM-MOEA, LSMOF, MOEA/DVA, DGEA, LMOCSO and SLSEA on 54 test instances

| Problem | D | CCGDE3 | LCSA | IM-MOEA | LSMOF | MOEA/DVA | DGEA | LMOCSO | SLSEA |
|---|---|---|---|---|---|---|---|---|---|
| LSMOP1 | 100 | 0.00e+0(0.00e+0)− | 1.01e-1(1.29e-2)− | 1.56e-2(1.01e-2)− | 2.05e-1(3.94e-2)+ | 0.00e+0(0.00e+0)− | 2.01e-1(5.53e-2)+ | 2.33e-1(2.39e-2)+ | 1.67e-1(4.20e-2) |
| | 200 | 0.00e+0(0.00e+0)− | 1.04e-1(1.51e-2)− | 1.85e-3(3.28e-3)− | 1.63e-1(4.12e-2)≈ | 0.00e+0(0.00e+0)− | 2.23e-1(4.49e-2)+ | 1.63e-1(3.82e-2)≈ | 1.83e-1(4.27e-2) |
| | 500 | 0.00e+0(0.00e+0)− | 1.21e-1(3.46e-2)− | 0.00e+0(0.00e+0)− | 1.51e-1(4.16e-2)− | 0.00e+0(0.00e+0)− | 2.27e-1(3.34e-2)+ | 7.90e-2(2.21e-2)− | 1.94e-1(4.90e-2) |
| | 1000 | 0.00e+0(0.00e+0)− | 1.50e-1(5.74e-2)− | 0.00e+0(0.00e+0)− | 1.33e-1(3.57e-2)− | 0.00e+0(0.00e+0)− | 1.96e-1(7.20e-2)≈ | 6.70e-3(7.23e-3)− | 2.02e-1(4.51e-2) |
| | 2000 | 0.00e+0(0.00e+0)− | 2.03e-1(6.65e-2)≈ | 0.00e+0(0.00e+0)− | 1.28e-1(1.90e-2)− | 0.00e+0(0.00e+0)− | 2.34e-1(1.71e-2)≈ | 0.00e+0(0.00e+0)− | 2.17e-1(4.19e-2) |
| | 5000 | 0.00e+0(0.00e+0)− | 2.19e-1(4.93e-2)≈ | 0.00e+0(0.00e+0)− | 1.26e-1(2.39e-2)− | 0.00e+0(0.00e+0)− | 2.40e-1(7.05e-3)≈ | 0.00e+0(0.00e+0)− | 2.18e-1(3.47e-2) |
| LSMOP2 | 100 | 3.11e-1(8.28e-3)− | 4.09e-1(4.79e-3)− | 4.04e-1(6.79e-3)− | 5.24e-1(3.74e-3)+ | 2.95e-1(2.56e-3)− | 4.94e-1(3.58e-2)− | 3.93e-1(5.11e-3)− | 5.20e-1(5.00e-3) |
| | 200 | 4.03e-1(2.90e-3)− | 4.88e-1(4.48e-4)− | 4.48e-1(2.94e-3)− | 5.46e-1(2.08e-3)− | 3.91e-1(1.69e-3)− | 5.44e-1(2.75e-3)− | 4.62e-1(2.14e-3)− | 5.48e-1(9.37e-4) |
| | 500 | 4.91e-1(1.08e-3)− | 5.42e-1(4.01e-4)− | 5.07e-1(1.50e-3)− | 5.63e-1(8.68e-4)− | 4.87e-1(1.13e-3)− | 5.66e-1(1.20e-3)≈ | 5.27e-1(1.10e-3)− | 5.66e-1(4.48e-4) |
| | 1000 | 5.32e-1(5.31e-4)− | 5.60e-1(3.98e-4)− | 5.36e-1(6.70e-4)− | 5.69e-1(5.33e-4)− | 5.29e-1(1.07e-3)− | 5.74e-1(4.67e-4)+ | 5.53e-1(6.21e-4)− | 5.72e-1(7.86e-4) |
| | 2000 | 5.56e-1(3.53e-4)− | 5.68e-1(3.22e-4)− | 5.55e-1(5.69e-4)− | 5.73e-1(4.21e-4)− | 5.53e-1(1.07e-3)− | 5.78e-1(3.66e-4)+ | 5.67e-1(4.00e-4)− | 5.76e-1(5.16e-4) |
| | 5000 | 5.70e-1(3.44e-4)− | 5.73e-1(3.95e-4)− | 5.67e-1(3.08e-4)− | 5.76e-1(4.08e-4)− | 5.66e-1(1.30e-3)− | 5.81e-1(3.24e-4)+ | 5.76e-1(2.85e-4)− | 5.78e-1(5.61e-4) |
| LSMOP3 | 100 | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 2.21e-3(9.91e-3)≈ | 1.26e-2(2.32e-2)+ | 0.00e+0(0.00e+0) |
| | 200 | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0) |
| | 500 | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 5.45e-4(2.44e-3)≈ | 0.00e+0(0.00e+0) |
| | 1000 | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0) |
| | 2000 | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0) |
| | 5000 | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0) |
| LSMOP4 | 100 | 2.17e-1(1.65e-2)− | 2.89e-1(5.94e-3)− | 3.33e-1(8.26e-3)− | 4.59e-1(1.30e-2)+ | 2.03e-1(4.78e-3)− | 3.91e-1(1.90e-2)≈ | 3.23e-1(3.82e-2)− | 3.97e-1(5.24e-3) |
| | 200 | 3.32e-1(8.30e-3)− | 3.87e-1(2.89e-3)− | 4.13e-1(4.06e-3)− | 4.88e-1(6.52e-3)+ | 3.16e-1(2.75e-3)− | 4.63e-1(9.35e-3)− | 3.93e-1(6.93e-3)− | 4.69e-1(1.38e-3) |
| | 500 | 4.40e-1(2.15e-3)− | 4.69e-1(1.82e-3)− | 4.85e-1(1.32e-3)− | 5.28e-1(2.43e-3)− | 4.30e-1(1.09e-3)− | 5.24e-1(4.26e-3)− | 4.76e-1(1.63e-3)− | 5.29e-1(6.55e-4) |
| | 1000 | 4.93e-1(1.70e-3)− | 5.09e-1(2.10e-3)− | 5.24e-1(9.92e-4)− | 5.50e-1(1.05e-3)− | 4.91e-1(1.31e-3)− | 5.52e-1(1.93e-3)− | 5.19e-1(6.12e-4)− | 5.54e-1(4.12e-4) |
| | 2000 | 5.31e-1(1.53e-3)− | 5.35e-1(3.86e-4)− | 5.49e-1(1.08e-3)− | 5.63e-1(5.93e-4)− | 5.29e-1(1.23e-3)− | 5.67e-1(1.06e-3)≈ | 5.47e-1(3.70e-4)− | 5.67e-1(4.81e-4) |
| | 5000 | 5.58e-1(7.09e-4)− | 5.60e-1(4.27e-4)− | 5.66e-1(3.34e-4)− | 5.72e-1(5.53e-4)− | 5.56e-1(1.41e-3)− | 5.76e-1(5.59e-4)+ | 5.67e-1(3.04e-4)− | 5.75e-1(5.73e-4) |
| LSMOP5 | 100 | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 0.00e+0(0.00e+0)− | 2.81e-2(2.30e-2)− | 9.09e-2(4.27e-17) |
| | 200 | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 0.00e+0(0.00e+0)− | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17) |
| | 500 | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 2.29e-2(3.84e-2)− | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17) |
| | 1000 | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 2.93e-2(4.21e-2)− | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17) |
| | 2000 | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 6.61e-2(4.02e-2)− | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17) |
| | 5000 | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 6.82e-2(4.04e-2)− | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17) |
| LSMOP6 | 100 | 0.00e+0(0.00e+0)− | 0.00e+0(0.00e+0)− | 0.00e+0(0.00e+0)− | 3.23e-2(1.86e-2)+ | 0.00e+0(0.00e+0)− | 5.13e-4(2.29e-3)− | 2.14e-2(2.06e-2)≈ | 7.90e-3(8.18e-3) |
| | 200 | 0.00e+0(0.00e+0)− | 0.00e+0(0.00e+0)− | 0.00e+0(0.00e+0)− | 2.72e-2(6.94e-3)+ | 0.00e+0(0.00e+0)− | 7.64e-3(1.56e-2)− | 2.70e-3(5.83e-3)− | 1.16e-2(7.90e-3) |
| | 500 | 0.00e+0(0.00e+0)− | 2.31e-2(8.77e-4)− | 0.00e+0(0.00e+0)− | 4.46e-2(6.49e-3)+ | 0.00e+0(0.00e+0)− | 4.38e-2(2.30e-2)≈ | 1.76e-2(6.21e-3)− | 3.35e-2(7.25e-3) |
| | 1000 | 2.73e-2(4.40e-4)− | 5.57e-2(3.40e-4)− | 3.03e-2(1.54e-3)− | 7.96e-2(6.64e-3)+ | 3.03e-5(1.36e-4)− | 6.93e-2(1.98e-2)≈ | 4.48e-2(5.63e-3)− | 7.24e-2(9.74e-3) |
| | 2000 | 5.91e-2(1.54e-4)− | 7.28e-2(7.76e-5)− | 5.94e-2(2.38e-4)− | 1.03e-1(8.81e-3)+ | 0.00e+0(0.00e+0)− | 8.33e-2(2.33e-2)− | 6.42e-2(2.43e-3)− | 9.06e-2(7.46e-3) |
| | 5000 | 7.82e-2(4.19e-5)− | 8.34e-2(2.06e-5)− | 7.82e-2(4.54e-5)− | 1.09e-1(9.53e-3)+ | 0.00e+0(0.00e+0)− | 8.35e-2(1.15e-4)− | 7.95e-2(5.63e-4)− | 1.02e-1(5.53e-3) |
| LSMOP7 | 100 | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0) |
| | 200 | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0) |
| | 500 | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0) |
| | 1000 | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0) |
| | 2000 | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0) |
| | 5000 | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0)≈ | 0.00e+0(0.00e+0) |
| LSMOP8 | 100 | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 0.00e+0(0.00e+0)− | 3.08e-2(1.91e-2)− | 9.09e-2(4.27e-17) |
| | 200 | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 6.68e-3(1.73e-2)− | 7.43e-5(3.32e-4)− | 9.09e-2(4.27e-17) |
| | 500 | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 4.33e-2(4.12e-2)− | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17) |
| | 1000 | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 6.71e-2(3.98e-2)− | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17) |
| | 2000 | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 7.15e-2(3.67e-2)− | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17) |
| | 5000 | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 7.25e-2(3.72e-2)− | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17) |
| LSMOP9 | 100 | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)≈ | 8.74e-2(2.14e-2)− | 9.09e-2(4.27e-17)≈ | 0.00e+0(0.00e+0)− | 0.00e+0(0.00e+0)− | 1.05e-2(1.10e-2)− | 9.09e-2(4.27e-17) |
| | 200 | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)− | 9.32e-2(1.24e-2)− | 9.09e-2(4.27e-17)− | 0.00e+0(0.00e+0)− | 3.06e-3(6.04e-3)− | 5.93e-2(5.04e-3)− | 9.62e-2(9.32e-3) |
| | 500 | 0.00e+0(0.00e+0)− | 9.09e-2(4.27e-17)− | 1.25e-1(2.21e-2)+ | 9.10e-2(1.08e-4)− | 0.00e+0(0.00e+0)− | 5.98e-3(1.73e-2)− | 7.75e-2(1.96e-2)− | 1.05e-1(1.87e-2) |
| | 1000 | 0.00e+0(0.00e+0)− | 9.10e-2(9.78e-6)− | 1.55e-1(2.36e-2)+ | 9.21e-2(2.62e-3)− | 0.00e+0(0.00e+0)− | 0.00e+0(0.00e+0)− | 6.66e-2(7.68e-3)− | 1.17e-1(2.63e-2) |
| | 2000 | 0.00e+0(0.00e+0)− | 9.10e-2(1.38e-6)− | 1.68e-1(2.15e-2)+ | 9.55e-2(6.86e-3)− | 0.00e+0(0.00e+0)− | 0.00e+0(0.00e+0)− | 5.83e-2(6.24e-3)− | 1.32e-1(3.11e-2) |
| | 5000 | 0.00e+0(0.00e+0)− | 9.11e-2(1.99e-6)− | 1.79e-1(2.42e-2)+ | 9.27e-2(1.39e-3)− | 0.00e+0(0.00e+0)− | 0.00e+0(0.00e+0)− | 2.52e-2(9.85e-3)− | 1.63e-1(1.22e-2) |
| +/−/≈ | | 0/42/12 | 0/27/27 | 4/38/12 | 10/18/26 | 0/42/12 | 7/27/20 | 2/39/13 | |

‘+’, ‘−’ and ‘≈’ indicate that the result is significantly better than, significantly worse than and statistically similar to that obtained by SLSEA, respectively. The best result in each row is presented in bold type

Performance on LSMOP problems

The overall performance is presented in Tables 1 and 2, which show the IGD and HV results achieved by the compared MOEAs on the nine LSMOP problems with 100 to 5000 decision variables. It can be observed from Table 1 that SLSEA has achieved the largest number of best results, followed by DGEA, LMOCSO, and LSMOF, while MOEA/DVA has failed to achieve any best result. Notably, MOEA/DVA has consumed four times the maximum number of FEs for decision variable analysis. Thus, it has no additional resources for further optimization, which could explain its unsatisfactory performance. To be more specific, SLSEA has achieved the best results mainly on LSMOP4, LSMOP5, LSMOP6, and LSMOP8; DGEA has performed the best on LSMOP1 and LSMOP2; LSMOF has shown superior performance mainly on LSMOP7; and LMOCSO has shown its advantages on LSMOP3 and LSMOP9. On LSMOP3, LSMOP5, LSMOP7, and LSMOP8, the differences among the achieved IGD results are significant (e.g., over 100-fold on LSMOP3 and LSMOP7), since these problems are challenging for large-scale MOEAs to converge to the PFs. On the remaining test problems, the compared algorithms have achieved competitive results, since the main difficulty of these problems lies in diversity maintenance, i.e., in obtaining evenly distributed results. As for Table 2, LSMOF has achieved the largest number of best results, which can be attributed to the fact that LSMOF obtains widespread solution sets in the objective space, leading to better HV values in comparison with SLSEA (refer to Fig. 7).
To visually show the dynamic behaviors of the compared algorithms in large-scale multiobjective optimization, Fig. 6 presents their convergence profiles in terms of IGD values. The convergence profiles of LSMOF and SLSEA are similar on LSMOP1, LSMOP3, LSMOP5, LSMOP6, LSMOP7, and LSMOP8; DGEA converges similarly to SLSEA on LSMOP1, LSMOP2, LSMOP4, and LSMOP6; and SLSEA has shown overall superiority over LMOCSO and MOEA/DVA on all the test instances. It can be concluded that the search behavior of SLSEA is similar to that of DGEA and LSMOF, since they all have effective strategies for dealing with a large number of decision variables. Nevertheless, due to its balanced consideration of convergence enhancement, diversity maintenance, and local search, SLSEA has shown better versatility in solving different LSMOPs, whereas LSMOF and DGEA have shown biased performance.
Moreover, the final non-dominated solutions achieved by the compared algorithms on the nine LSMOP problems in the run associated with the best IGD value are displayed in Fig. 7. It can be seen that SLSEA has behaved similarly to DGEA on LSMOP1, LSMOP2, LSMOP6, and LSMOP8, and similarly to LSMOF on LSMOP2, LSMOP6, LSMOP7, and LSMOP8, which largely agrees with the convergence profiles in Fig. 6. Both LSMOF and DGEA use direction vectors in the decision space, while SLSEA uses the guidance of the variable classifications; thus, all of them have shown effectiveness in large-scale multiobjective optimization. Nevertheless, due to the differences in offspring generation, SLSEA tends to combine the advantages of LSMOF and DGEA to achieve a balanced performance, leading to its superiority over both.
In the Supplementary Materials, we also give the convergence profiles of the compared algorithms on five representative LSMOP problems with a maximum number of \(5\times 10^6\) FEs (25\(\times \) the original setting). The results indicate that SLSEA tends to converge fast within the first \(1\times 10^6\) FEs and to stagnate around \(4\times 10^6\) FEs. Thus, SLSEA is more suitable for solving LSMOPs with limited FEs.
Table 3 The computation time consumed by five compared algorithms on four test instances

| Algorithm | LSMOP1 (d = 5000) | LSMOP3 (d = 5000) | TREE1 (d = 15,000) | TREE3 (d = 30,000) |
|---|---|---|---|---|
| LSMOF | 1.89e+3 (8.02e+1) | 3.39e+3 (8.69e+2) | 4.36e+3 (5.07e+1) | 2.79e+3 (7.69e+2) |
| MOEA/DVA | 9.56e+2 (2.51e+2) | 1.25e+3 (2.68e+2) | 3.24e+3 (1.03e+2) | 1.02e+4 (3.48e+3) |
| DGEA | 9.08e+3 (2.68e+3) | 6.53e+3 (1.74e+3) | 3.10e+3 (7.47e+1) | 5.12e+3 (1.67e+3) |
| LMOCSO | 3.90e+3 (6.52e+2) | 5.37e+3 (1.18e+3) | 1.69e+3 (1.51e+2) | 2.95e+3 (7.89e+2) |
| SLSEA | 3.94e+3 (1.01e+3) | 3.72e+3 (8.37e+2) | 1.19e+3 (3.11e+1) | 2.64e+3 (6.39e+2) |

The best result in each row is highlighted

Complexity analysis

As indicated in Algorithm 1, the computational complexity of SLSEA is mainly contributed by four functions, i.e., the variable classification, the diversity-related sampling, the local search-related sampling, and the environmental selection. For the first component, the computational complexity in terms of big O notation, as indicated in Algorithm 3, is \(O(2nm)+2\times O(n_b\times (n_s+mn_s+mn^2+mn_b^2)+n_b^2+2\times (2n_b)^2)\), i.e., \(O(mn+mn^2n_b+mn_b^3+n_bn_s+mn_sn_b+n_b^2)\) after eliminating the constant factors, which can be further simplified to \(O(mn^2n_b+mn_b^3+mn_bn_s+n_b^2)\). Algorithms 5 and 6 are similar, and their total computational complexity is \(O(mn+n\log n+n_b(n+1))+O(n_s+n_s(n+1))\). As for the last function, the computational complexity of the environmental selection in NSGA-II is \(O(m(n+nn_b+nn_s)^2)\) [8]. In summary, the computational complexity of SLSEA is \(O(mn^2n_b+mn_b^3+mn_bn_s+n_b^2)+O(mn+n\log n+n_b(n+1))+O(n_s+n_s(n+1))+O(m(n+nn_b+nn_s)^2)\). It can be simplified to \(O(m(n^2n_b+n_b^3+(n+nn_b+nn_s)^2)+n(\log n+n_b+n_s)+n_b^2)\), i.e., \(O(mn^2(n_b+(1+n_b+n_s)^2)+mn_b^3)\). The main computational cost comes from the non-dominated sorting based environmental selection. It can be reduced by using some divide-and-conquer strategies, e.g., selecting the non-dominated solutions from each subpopulation instead of from the combination of all solutions. Besides, if \(n_b\) and \(n_s\) are treated as constants, the final computational complexity of SLSEA becomes \(O(mn^2)\), the same as that of NSGA-II. Thus, the efficiency of SLSEA in terms of computational complexity is acceptable.
Table 3 further displays the computation time of the five compared algorithms on two LSMOP problems and two TREE problems. It can be observed that SLSEA has consumed acceptable computation time for large-scale multiobjective optimization, which validates its efficiency in terms of computation time.

Conclusion

In this work, we have proposed to use efficient sampling strategies for large-scale multiobjective optimization. Three different sampling strategies are designed for convergence enhancement, diversity maintenance, and local search, respectively. Generally, the sampling strategies are computationally cheap, robust, and versatile for scaling up evolutionary multiobjective optimization, mainly for three reasons. First, the sampling operation is efficient compared with generic crossover and mutation operations: even when the number of decision variables increases rapidly, the cost of generating promising candidate solutions increases only linearly. The computation time comparison has validated this efficiency on LSMOPs with a large number of decision variables. Second, two aspects ensure the robustness of SLSEA in scaling up to large-scale optimization. On the one hand, the comprehensive consideration of convergence enhancement, diversity maintenance, and local search gives SLSEA diverse offspring generation behaviors. On the other hand, the use of probability distributions enhances the randomness of the generated offspring solutions for better global optimization. Third, due to the use of different probability distributions for different purposes, the behavior of SLSEA can be easily adjusted for handling different types of LSMOPs. In other words, if an expert has prior knowledge about the landscape of an LSMOP, a specific sampling strategy can be preferred to achieve tailored behaviors for solving it. In summary, efficient sampling strategies are promising for large-scale multiobjective optimization, and more effective sampling strategies are highly desirable.
In addition to the convergence- and diversity-related sampling strategies, we also emphasize the role of local search in this work. Ablation studies using different variants of SLSEA have indicated the importance of local search, which is largely ignored in most existing large-scale MOEAs. Notably, the variant of SLSEA with only local search-related sampling has behaved similarly to LSMOF; both algorithms can converge to a certain degree at the early stage of evolution (refer to [35]). Though the local search-related sampling has contributed significantly to the performance of SLSEA, the combination of the three sampling strategies is expected to perform robustly for large-scale multiobjective optimization. All in all, the experimental results on various LSMOPs have validated the effectiveness and efficiency of SLSEA.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Nos. U20A20306 and 61906081), and the National Key Research and Development Program of China (No. 2022YFB2403803). Y. Jin is supported by an Alexander von Humboldt Professorship for AI endowed by the German Federal Ministry of Education and Research.

Declarations

Conflict of interest

The authors have no conflicts of interest to declare.
Open Access

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendix

Supplementary Information

Below is the link to the electronic supplementary material.
Footnotes
1
The lower and upper boundaries of each decision variable are normalized to 0 and 1, respectively. If a sampled decision value exceeds the lower/upper boundary, it is truncated to the corresponding boundary.
 
References

1. Tian Y, Yang S, Zhang X (2020) An evolutionary multiobjective optimization based fuzzy method for overlapping community detection. IEEE Trans Fuzzy Syst 28(11):2841–2855
2. Zhang X, Tian Y, Cheng R, Jin Y (2015) An efficient approach to nondominated sorting for evolutionary multiobjective optimization. IEEE Trans Evol Comput 19(2):201–213
3. Coello CAC (2006) Evolutionary multi-objective optimization: a historical view of the field. IEEE Comput Intell Mag 1(1):28–36
4. Schaffer JD (1985) Multiple objective optimization with vector evaluated genetic algorithms. Lawrence Erlbaum Associates, Inc., Publishers
5. Fonseca CM, Fleming PJ et al (1993) Genetic algorithms for multiobjective optimization: formulation discussion and generalization. In: 5th International Conference on Genetic Algorithms. Citeseer, San Francisco, CA, pp 416–423
6. Srinivas N, Deb K (1994) Muiltiobjective optimization using nondominated sorting in genetic algorithms. Evol Comput 2(3):221–248
7. Zitzler E, Thiele L (1999) Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans Evol Comput 3:257–271
8. Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197
9. Zhang Q, Li H (2007) MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans Evol Comput 11:712–731
10. Zitzler E, Künzli S (2004) Indicator-based selection in multiobjective search. In: Yao X, Burke EK, Lozano JA, Smith J, Merelo-Guervós JJ, Bullinaria JA, Rowe JE, Tiňo P, Kabán A, Schwefel HP (eds) Parallel problem solving from nature—PPSN VIII. Springer, Berlin, pp 832–842
11. Zhou A, Qu B, Li H, Zhao S, Suganthan PN, Zhang Q (2011) Multiobjective evolutionary algorithms: a survey of the state of the art. Swarm Evol Comput 1(1):32–49
12. Li B, Li J, Tang K, Yao X (2015) Many-objective evolutionary algorithms: a survey. ACM Comput Surv 48(1)
13. Ishibuchi H, Hitotsuyanagi Y, Tsukamoto N, Nojima Y (2010) Many-objective test problems to visually examine the behavior of multiobjective evolution in a decision space. In: Schaefer R, Cotta C, Kołodziej J, Rudolph G (eds) Parallel problem solving from nature, PPSN XI. Springer, Berlin, pp 91–100
14. Yang S, Li M, Liu X, Zheng J (2013) A grid-based evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 17:721–736
15. Deb K, Jain H (2014) An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints. IEEE Trans Evol Comput 18:577–601
16. Hong W, Tang K, Zhou A, Ishibuchi H, Yao X (2018) A scalable indicator-based evolutionary algorithm for large-scale multiobjective optimization. IEEE Trans Evol Comput 23(3):525–537
17. Zhang X, Tian Y, Jin Y (2015) A knee point driven evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 19:761–776
18. Cheng R, Jin Y, Olhofer M, Sendhoff B (2016) A reference vector guided evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 20:773–791
19. Bader J, Zitzler E (2011) HypE: an algorithm for fast hypervolume-based many-objective optimization. Evol Comput 19(1):45–76
20. Chen H, Tian Y, Pedrycz W, Wu G, Wang R, Wang L (2019) Hyperplane assisted evolutionary algorithm for many-objective optimization problems. IEEE Trans Cybern 50(7):3367–3380
22. Tian Y, Zhang X, Wang C, Jin Y (2019) An evolutionary algorithm for large-scale sparse multiobjective optimization problems. IEEE Trans Evol Comput 24(2):380–393
23. He C, Cheng R, Zhang C, Tian Y, Chen Q, Yao X (2020) Evolutionary large-scale multiobjective optimization for ratio error estimation of voltage transformers. IEEE Trans Evol Comput 24(5):868–881
24. Wang H, Jin Y, Yao X (2016) Diversity assessment in many-objective optimization. IEEE Trans Cybern 47(6):1510–1522
25. Qian H, Yu Y (2017) Solving high-dimensional multi-objective optimization problems with low effective dimensions. In: AAAI, vol 31. AAAI Press, New York, pp 875–881
26. He C, Cheng R (2021) Population sizing of evolutionary large-scale multiobjective optimization. In: Ishibuchi H, Zhang Q, Cheng R, Li K, Li H, Wang H, Zhou A (eds) Evolutionary multi-criterion optimization. Springer International Publishing, Cham, pp 41–52
27. Antonio LM, Coello CAC (2017) Coevolutionary multi-objective evolutionary algorithms: a survey of the state-of-the-art. IEEE Trans Evol Comput 22:851–865
28. Liu S, Lin Q, Tian Y, Tan KC (2021) A variable importance-based differential evolution for large-scale multiobjective optimization. IEEE Trans Cybern 52(12):13048–13062
30. Ma X, Liu F, Qi Y, Wang X, Li L, Jiao L, Yin M, Gong M (2016) A multiobjective evolutionary algorithm based on decision variable analyses for multi-objective optimization problems with large scale variables. IEEE Trans Evol Comput 20:275–298
31. Ma L, Huang M, Yang S, Wang R, Wang X (2021) An adaptive localized decision variable analysis approach to large-scale multiobjective and many-objective optimization. IEEE Trans Cybern 52(7):6684–6696
32. Zhang X, Tian Y, Jin Y, Cheng R (2016) A decision variable clustering-based evolutionary algorithm for large-scale many-objective optimization. IEEE Trans Evol Comput 22:97–112
33. Chen H, Cheng R, Wen J, Li H, Weng J (2020) Solving large-scale many-objective optimization problems by covariance matrix adaptation evolution strategy with scalable small subpopulations. Inf Sci 509:457–469
34. Zille H, Ishibuchi H, Mostaghim S, Nojima Y (2018) A framework for large-scale multi-objective optimization based on problem transformation. IEEE Trans Evol Comput 22:260–275
35. He C, Li L, Tian Y, Zhang X, Cheng R, Jin Y, Yao X (2019) Accelerating large-scale multiobjective optimization via problem reformulation. IEEE Trans Evol Comput 23(6):949–961
36. Tian Y, Zheng X, Zhang X, Jin Y (2020) Efficient large-scale multiobjective optimization based on a competitive swarm optimizer. IEEE Trans Cybern 50(8):3696–3708
37. He C, Cheng R, Yazdani D (2022) Adaptive offspring generation for evolutionary large-scale multiobjective optimization. IEEE Trans Syst Man Cybern 52(2):786–798
38. Qin S, Sun C, Jin Y, Tan Y, Fieldsend J (2021) Large-scale evolutionary multiobjective optimization assisted by directed sampling. IEEE Trans Evol Comput 25(4):724–738
39. Whitley D (1994) A genetic algorithm tutorial. Stat Comput 4(2):65–85
40. He C, Tian Y, Jin Y, Zhang X, Pan L (2017) A radial space division based many-objective optimization evolutionary algorithm. Appl Soft Comput 61:603–621
41. Pan L, He C, Ye T, Su Y, Zhang X (2017) A region division based diversity maintaining approach for many-objective optimization. Integr Comput-Aided Eng 24(3):279–296
42. Haynes W (2013) Wilcoxon rank sum test. In: Encyclopedia of systems biology. Springer, pp 2354–2355
43. Tian Y, Cheng R, Zhang X, Jin Y (2017) PlatEMO: a MATLAB platform for evolutionary multi-objective optimization [educational forum]. IEEE Comput Intell Mag 12:73–87
44. Coello CAC, Cortés NC (2005) Solving multiobjective optimization problems using an artificial immune system. Genet Program Evolvable Mach 6(2):163–190
45. While L, Hingston P, Barone L, Huband S (2006) A faster algorithm for calculating hypervolume. IEEE Trans Evol Comput 10:29–38
46. Ishibuchi H, Imada R, Setoguchi Y, Nojima Y (2018) How to specify a reference point in hypervolume calculation for fair performance comparison. Evol Comput 26(3):411–440
47. Ishibuchi H, Imada R, Masuyama N, Nojima Y (2019) Comparison of hypervolume, IGD and IGD+ from the viewpoint of optimal distributions of solutions. In: Evolutionary multi-criterion optimization. Springer International Publishing, Cham, pp 332–345
48. Deb K, Thiele L, Laumanns M, Zitzler E (2005) Scalable test problems for evolutionary multiobjective optimization. Springer, London, pp 105–145
49. Huband S, Hingston P, Barone L, While L (2006) A review of multiobjective test problems and a scalable test problem toolkit. IEEE Trans Evol Comput 10(5):477–506
51. Li H, Zhang Q (2009) Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II. IEEE Trans Evol Comput 13:284–302