
Open Access 18-07-2023 | Original Article

A many-objective evolutionary algorithm with metric-based reference vector adjustment

Authors: Xujian Wang, Fenggan Zhang, Minli Yao

Published in: Complex & Intelligent Systems | Issue 1/2024


Abstract

To deal with the inconsistency between the distribution of reference vectors (RVs) and the shape of the Pareto front in decomposition-based multiobjective evolutionary algorithms, many studies have proposed adjusting RVs during the evolutionary process. However, most existing algorithms adjust RVs either in every generation or at a fixed frequency, without considering the evolving information of the population. To tackle this issue, we propose a many-objective evolutionary algorithm with metric-based reference vector adjustment named MBRA. To adjust RVs periodically and conditionally, we use the improvement rate of the convergence degree of subproblems, computed through the d1 distance, to reflect the overall convergence of the population. Only when the subproblems are considered convergent on the whole are RVs allowed to be adjusted: invalid RVs associated with no solutions are deleted, and new ones are added for diversity using the farthest solution in the subregion of the most crowded RV. Besides, in mating selection and environmental selection, we adopt the sum of objectives and fitness-based sorting based on \(I_{\varepsilon + }\) as the secondary principles, respectively, to promote convergence. Validated through extensive experiments, MBRA is effective and competitive on many-objective optimization problems, especially those with irregular Pareto fronts.
Notes

Supplementary Information

The online version contains supplementary material available at https://doi.org/10.1007/s40747-023-01161-w.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Introduction

In various aspects of life, scenarios in which multiple objectives must be considered simultaneously are commonly encountered [14]. Due to the conflict among objectives, no single solution can satisfy all of them at the same time, and an improvement in some objectives deteriorates others. Researchers formulate such problems as multiobjective optimization problems (MOPs), which can be modelled as follows:
$$ \begin{aligned} \text{Minimize} \quad & \mathbf{F}(\mathbf{x}) = (f_{1}(\mathbf{x}), \ldots, f_{m}(\mathbf{x})) \\ \text{subject to} \quad & \mathbf{x} \in \Omega \end{aligned} $$
(1)
where \({\mathbf{x}} = (x_{1} , \ldots ,x_{D} ) \in R^{D}\) represents a decision vector in the decision space Ω, D is the number of decision variables, and F(x) is an objective vector containing m objectives. In an MOP, y is said to be Pareto-dominated by x, denoted as \({\mathbf{x}} \prec {\mathbf{y}}\), if and only if \(f_{i} ({\mathbf{x}}) \le f_{i} ({\mathbf{y}})\) holds for every objective and F(x) ≠ F(y). All the Pareto optimal solutions, which are not Pareto-dominated by any other solution, constitute the Pareto optimal set (PS), and the corresponding vectors of these solutions in the objective space constitute the Pareto optimal front (PF).
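For concreteness, here is a minimal NumPy sketch of this dominance check for minimization; the function name `dominates` is illustrative:

```python
import numpy as np

def dominates(fx: np.ndarray, fy: np.ndarray) -> bool:
    """True if objective vector fx Pareto-dominates fy (minimization):
    fx is no worse in every objective and strictly better in at least one."""
    return bool(np.all(fx <= fy) and np.any(fx < fy))
```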
Plenty of multiobjective evolutionary algorithms (MOEAs) have shown their effectiveness on MOPs [5, 6]; they can be classified into three classes: dominance-based MOEAs, indicator-based MOEAs, and decomposition-based MOEAs. However, their effectiveness degrades when they are applied to MOPs with four or more objectives, which are categorized as many-objective optimization problems (MaOPs) [7]. To make MOEAs suitable for MaOPs, efforts have been made as follows.
Efforts in dominance-based algorithms involve two aspects. One aspect is to remedy the dominance relation. Since the traditional definition of Pareto dominance fails to distinguish solutions in MaOPs, the most intuitive way is to define new dominance relations that relax or tighten the original dominance area, such as \(\varepsilon\)-dominance [8], preference order ranking [9, 10], L-optimality [11], fuzzy dominance [12, 13], and grid-based dominance [14, 15]. Recently, a new method to enlarge the dominance area called generalization of Pareto optimality (GPO) [16, 17] was proposed, which enhances the scalability of existing Pareto-based algorithms. Although the modified dominance relations increase the selection pressure, they incur another intractable issue: their specific parameters need to be tuned carefully. Besides, solutions in these algorithms are inclined to converge to certain parts of the PF. The other aspect is to leverage diversity management. One of the most representative algorithms is the shift-based density estimation method (SDE) [18], which shifts solutions with poor convergence to the crowded area so that poorly converged solutions are discarded. Several new dominance relations inspired by decomposition-based algorithms and combined with the framework of NSGA-II [19] have shown their effectiveness on MaOPs. For example, reference point-based dominance (RP-dominance) [20] creates a strict partial order on the set of nondominated solutions using a set of well-distributed reference points. The strengthened dominance relation (SDR) [21] uses a tailored niching technique to balance convergence and diversity of the nondominated solution set. Two aspects of SDR, i.e., convergence degree and niche size, were studied further, and controlled SDR (CSDR) was proposed in [22].
Indicator-based algorithms employ performance indicators, such as IGD [23], HV [24], R2 [25, 26], and \(I_{\varepsilon + }\) [27, 28], to select solutions during the evolutionary process. The high computational burden hinders the wide application of such approaches, especially for HV in high-dimensional objective spaces, even though Monte Carlo sampling can approximate it. A recently proposed indicator, IGD-NS [29], distinguishes the non-dominated solutions that make no contribution to the calculation of IGD, and has been applied in AR-MOEA [30]. Although AR-MOEA can solve problems with regular and irregular PFs simultaneously, its high computational burden remains a shortcoming. ISDE+ [31] is a new indicator combining the sum of objectives and SDE, which is expected to improve the selection pressure towards the PF while maintaining the diversity of the population. Readers can refer to [32, 33] for more details about indicator-based algorithms, including their strengths, weaknesses, future trends, and potential research directions.
Apart from the above two classes, decomposition-based algorithms are also commonly applied to MOPs and MaOPs; they decompose an MOP into several scalar subproblems [34] or a number of simple multiobjective subproblems [35, 36]. In decomposition-based algorithms, it is assumed that if the reference vectors are distributed uniformly, the optimal solutions will also be distributed uniformly with good diversity [37, 38]. Motivated by this assumption, several works have aimed at obtaining uniformly distributed reference vectors, such as Das and Dennis's simplex-lattice design [39], two-layer [40] and multi-layer [41] simplex-lattice designs, uniform random sampling [42], and uniform design [43]. As research has progressed, it has been recognized that the precondition of this assumption is that the PF shape fits the distribution of the reference vectors [44]. Namely, only when the PF shape is also simplex-like can the diversity of reference vectors lead to the diversity of the Pareto optimal solutions. In addition, it is reported in [45] that the performance of uniformly distributed reference vectors is closely related to the PF shape.
To alleviate the negative effect of a mismatched distribution of reference vectors, an intuitive approach is to adjust the reference vectors during the evolutionary process. Several works have been reported along this line, and a detailed review can be found in [46]. In reference vector adjustment algorithms, several important issues must be addressed. Although some of them have been solved to some extent, there is still room for further study, for example, on when to trigger the adjustment operation. For problems with regular PFs, a set of uniform reference vectors is enough to obtain an approximation set with uniformly distributed solutions. However, for problems with irregular PFs, uniformly distributed reference vectors cannot guarantee a uniform approximation set, and hence reference vectors need to be adjusted so that invalid or unevenly distributed ones are corrected in time. However, most existing algorithms with reference vector adjustment adjust the reference vectors in every generation or at a fixed frequency, where the frequency of adjustment is considered in isolation from the status of the evolving population. That is to say, as long as the frequency is reached, the reference vectors will be adjusted, even though there is still room for convergence improvement under the current reference vectors. This unexpected adjustment can damage the convergence of the population. This motivates us to adjust reference vectors based on the improvement rate of the subproblems corresponding to the reference vectors, and to allow adjustment only when the improvement rate of convergence is smaller than a threshold.
To solve the above issue, we propose a many-objective evolutionary algorithm with metric-based reference vector adjustment named MBRA, in which the reference vectors are adjusted periodically and conditionally, depending not only on the frequency but also on a designed condition. This metric-based adjustment considers the convergence degree of the subproblems and triggers the adjustment operation when the population is judged to be convergent on the whole. The innovations of MBRA are highlighted in three aspects.
(1) A metric-based reference vector adjustment timing strategy is designed to trigger adjustment periodically and conditionally. In this strategy, when to trigger the adjustment operation depends on the improvement rate of subproblem convergence, and only when the subproblems are considered convergent on the whole are reference vectors allowed to be adjusted.

(2) A new delete-and-add method to adjust reference vectors is designed in MBRA. In this method, the invalid reference vectors that have no associated solutions are first deleted, and then, to preserve diversity, new ones are generated one by one from the farthest solution in the subregion of the reference vector that has the most associated solutions.

(3) In environmental selection, fitness-based sorting is utilized as an auxiliary selection principle when non-dominated sorting fails to discriminate solutions. In fitness-based sorting, the solutions in each subregion are sorted by fitness computed through the convergence indicator \(I_{\varepsilon + }\), which increases the selection pressure. In mating selection, Pareto dominance and the sum of objectives are used to determine the mating pool.
The structure of the text is as follows. Sect. "Related work" introduces some related work. Sect. "Proposed algorithm" shows the framework of MBRA and expounds its main procedures step by step. Sect. "Experimental setup" introduces the experimental setup. Sect. "Results and discussions" presents the results and discussions of the empirical experiments. At last, Sect. "Conclusions" summarizes our work and provides some remarks on future research directions.

Related work

First, we review the two mainly adopted adjustment timing strategies. Then, we summarize several classes of algorithms for problems with irregular PFs and introduce several representative algorithms with reference vector adjustment. Finally, we describe the convergence metric that will be utilized in MBRA.

Adjustment timing strategies

In recent years, many reference vector adjustment algorithms have appeared, and they mainly adopt two adjustment timing strategies. The first is to adjust reference vectors in every generation, as in [47, 48]. This strategy has two potential shortcomings. First, adjusting in every generation incurs a high computational burden. Second, for some problems, especially those with regular PFs, convergence may deteriorate because solutions evolve along the changing and unstable directions produced by the frequently adjusted reference vectors. The second strategy is to adjust reference vectors at a fixed frequency, some without starting and/or ending times, as in [49, 50], and some with them, as in [37, 38, 51, 52]. Compared with the first strategy, this one reduces the number of adjustments and the computational burden. However, it still does not consider the status of the evolving population. To tackle this issue, we use the designed metric to trigger the adjustment operation in MBRA.

Algorithms with reference vector adjustment

Recent years have witnessed plenty of research on solving MaOPs with irregular PFs. An intuitive way to tackle the mismatch between the distribution of reference vectors and the PF shape is to delete invalid reference vectors and generate new ones during the search process, as in A-NSGA-III [47], RVEAa [49], and iRVEA [48]. Besides, solutions in the current population or in an external archive can also be utilized to adjust reference vectors after a given number of generations, as in MOEA/D-AM2M [36], MOEA/D-AWA [37], AdaW [38], and AR-MOEA [30]. For this type of algorithm, when to adjust the reference vectors is not easy to determine: frequent adjustment slows down convergence, while adjustment after a fixed number of generations may lose promising solutions. Recently, machine learning techniques, such as self-organizing maps (SOM) [53] and growing neural gas networks (GNG) [54, 55], have been combined with MOEAs. They can learn the topology of the approximated PF, so there is no need to decide which reference vectors should be adjusted or when. However, the computational cost of these algorithms is high due to the network training process, and the performance of the adjustment is sensitive to the trained network. Apart from the above three types of algorithms, clustering-based algorithms have also been applied to problems with various PF shapes, such as EMyO/C [56] and CA-MOEA [57]. Despite the great success of the clustering-based approaches, it is nontrivial to specify the number of clusters beforehand, nor is it easy to determine which solution should be selected from each cluster to balance convergence and diversity. It is also hard to determine the frequency of the clustering operation, since frequent clustering may slow down convergence. Next, we briefly introduce several algorithms with reference vector adjustment.
(1) A-NSGA-III
In A-NSGA-III, reference points are adjusted adaptively in every generation. It adds and deletes reference points according to the number and position of the associated solutions. To be specific, each solution is first associated with its nearest reference point. Then, reference points are added near a crowded reference point through an (m − 1)-simplex. The reference point addition is affected by the simplex length and is not allowed near the corners. Besides, this way of adding new reference points may become ineffective if the objective space is very large, as it is impractical to expect that newly generated solutions will always lie near the newly added reference points. As a result, newly added reference points may remain inactive even if they lie in a promising region.
(2) RVEAa and iRVEA
RVEAa and iRVEA are two algorithms with reference vector adjustment based on RVEA. During the evolutionary process, RVEAa substitutes random reference vectors for invalid ones. Although this adjustment approach is straightforward, it cannot guarantee that the new reference vectors will intersect the PF, due to the randomness. To make up for this deficiency, iRVEA creates a new reference vector from the solution most different from the existing valid reference vectors, rather than in a random way. Apart from the difference in adjustment approach, their adjustment frequencies also differ: in RVEAa, reference vectors are adjusted every 10% of the maximum number of generations Gmax, while in iRVEA, they are adjusted in every generation.
(3) MOEA/D-AWA
MOEA/D-AWA is a decomposition-based multiobjective evolutionary algorithm that uses non-dominated solutions to aid the deletion and addition of weights. Specifically, it first removes some of the most crowded solutions, evaluated through the vicinity distance, and then obtains new subproblems according to the least crowded solutions. The adjustment in MOEA/D-AWA occurs every 100 generations after 80% of Gmax. It can optimize problems with disconnected PFs or PFs with a sharp peak and low tail, but it needs several extra predefined parameters, which is inconvenient in application. Besides, the external archive also adds extra computational burden.
(4) AdaW
AdaW is a decomposition-based algorithm with adaptive weights, which, like MOEA/D-AWA, adopts an external archive to aid the adjustment of weights. It adds new weights in undeveloped and promising areas, which are identified by contrasting the evolving population with the external archive. After adding weights there, AdaW deletes the worst-performing weight among the weights that share the same solution. Weights are updated every 5% of Gmax and remain unchanged in the last 10% of generations.

Convergence metric

In MBRA, reference vector adjustment is triggered conditionally according to the convergence degree of the subproblems, which is reflected through the d1 distance used in the PBI decomposition approach, described below:
$$ \begin{aligned} \text{Minimize} \quad & g^{PBI}(\mathbf{x} \mid \mathbf{w}, \mathbf{z}^{\min}) = d_{1} + \theta d_{2} \\ \text{subject to} \quad & \mathbf{x} \in \Omega \\ \text{where} \quad & d_{1} = \frac{\left\| (\mathbf{F}(\mathbf{x}) - \mathbf{z}^{\min})^{\mathrm{T}} \mathbf{w} \right\|}{\left\| \mathbf{w} \right\|}, \qquad d_{2} = \left\| \mathbf{F}(\mathbf{x}) - (\mathbf{z}^{\min} + d_{1} \mathbf{w}) \right\| \end{aligned} $$
(2)
As Fig. 1 shows, d1 is the distance from the projection of F(x) on the reference vector to the ideal point, and d2 is the perpendicular distance from F(x) to its projection on the reference vector. These two distances can reflect the convergence and diversity degree of solutions, respectively. In MBRA, we also adopt d1 distance to reflect the convergence degree to determine whether to adjust reference vectors or not, which is described thoroughly in the next section.
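As a reference, a minimal NumPy sketch of the two distances follows; it normalizes w to unit length (the usual PBI convention), and theta = 5 is a common default rather than a value prescribed here:

```python
import numpy as np

def pbi(f: np.ndarray, w: np.ndarray, z_min: np.ndarray, theta: float = 5.0):
    """Return (d1, d2, g_pbi) for objective vector f, reference vector w,
    and ideal point z_min, following Eq. (2)."""
    w_unit = w / np.linalg.norm(w)
    diff = f - z_min
    d1 = abs(float(diff @ w_unit))                   # convergence: projection length
    d2 = float(np.linalg.norm(diff - d1 * w_unit))   # diversity: perpendicular distance
    return d1, d2, d1 + theta * d2
```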

Proposed algorithm

This section describes MBRA step by step. First, the framework of MBRA is introduced. Then, the main procedures of MBRA are elaborated on, including mating selection, environmental selection, and reference vector adjustment. Finally, the computational complexity is analysed.

Framework of MBRA

Algorithm 1 shows the framework of MBRA. First, a population P containing N solutions and a reference vector set W containing N reference vectors are initialized. Then, the convergence metric (CM), which estimates the convergence degree of the subproblems, is computed through CalConvergence (Algorithm 3). Next, the following procedures loop until the maximum number of fitness evaluations FEmax is reached. A mating pool \(P^{\prime}\) is generated using the mating selection described in Sect. "Mating selection based on Pareto dominance and the sum of objectives", and an offspring population \(P^{\prime\prime}\) is generated through Variation. Then, N solutions are selected from the union population through EnvironmentalSelection (Algorithm 2). Finally, the reference vectors are adjusted periodically and conditionally through AdjustVector (Algorithm 4).
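To make the control flow concrete, a high-level Python sketch of this loop is given below; every helper is a placeholder standing in for the procedure of the same name described in this section, not actual PlatEMO code:

```python
def mbra(problem, N, fe_max, fr=0.1, alpha=0.01):
    """Sketch of Algorithm 1 (framework of MBRA)."""
    P = initialize_population(problem, N)            # N random solutions
    W = initialize_reference_vectors(problem.m, N)   # e.g., Das and Dennis's method
    CM = cal_convergence(P, W)                       # Algorithm 3
    fe = N
    while fe < fe_max:
        mates = mating_selection(P, N)               # Pareto dominance + sum of objectives
        offspring = variation(mates)                 # SBX and polynomial mutation
        fe += len(offspring)
        P = environmental_selection(P + offspring, W, N)                # Algorithm 2
        W, CM = adjust_vectors_if_due(P, W, CM, fe, fe_max, fr, alpha)  # Algorithm 4
    return P
```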

Mating selection based on Pareto dominance and the sum of objectives

To distinguish solutions, Pareto dominance is used first. However, when the number of objectives is large, it is hard to distinguish two solutions by Pareto dominance alone, so a secondary principle is needed. In this paper, we choose the sum of objectives as the secondary principle in mating selection, where a small sum is preferable. Specifically, two solutions are selected randomly and the non-dominated one is put into the mating pool. If they are mutually non-dominated, the one with the smaller sum of objectives is chosen. If they are still indistinguishable, one is chosen randomly. This is repeated until N solutions are selected.
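A minimal sketch of this binary tournament, reusing the `dominates` helper from the earlier sketch; `objs` is assumed to be an N×m array of objective values:

```python
import random
import numpy as np

def mating_selection(objs: np.ndarray, N: int) -> list:
    """Fill a mating pool of N indices via binary tournaments:
    Pareto dominance first, then the smaller sum of objectives, then random."""
    pool = []
    while len(pool) < N:
        i, j = random.sample(range(len(objs)), 2)
        if dominates(objs[i], objs[j]):
            pool.append(i)
        elif dominates(objs[j], objs[i]):
            pool.append(j)
        elif objs[i].sum() != objs[j].sum():
            pool.append(i if objs[i].sum() < objs[j].sum() else j)
        else:
            pool.append(random.choice((i, j)))
    return pool
```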

Environmental selection

In MaOPs, using only Pareto dominance to select solutions becomes ineffective because almost all solutions are non-dominated. Therefore, auxiliary strategies concerned with convergence should be considered. To this end, we adopt \(I_{\varepsilon + }\) [27] to estimate the convergence of solutions and sort the solutions belonging to the same subregion by the fitness computed according to \(I_{\varepsilon + }\). Besides, a max–min-angle strategy is adopted for diversity. In general, the environmental selection can be divided into two steps, i.e., step 1: compute fitness and sort solutions by fitness, and step 2: select solutions.
Step 1: Compute fitness and sort solutions by fitness
Before computing \(I_{\varepsilon + }\), normalize the objective vector F(xi) to \({\mathbf{F^{\prime}}}({\mathbf{x}}_{i} ) = (f^{\prime}_{1} ({\mathbf{x}}_{i} ), \ldots ,f^{\prime}_{m} ({\mathbf{x}}_{i} ))\) for each solution \({\mathbf{x}}_{i} \in Q\) according to the ideal point \({\mathbf{z}}^{\min } = (z_{1}^{\min } , \ldots ,z_{m}^{\min } )\) and the nadir point \({\mathbf{z}}^{\max } = (z_{1}^{\max } , \ldots ,z_{m}^{\max } )\) as follows (line 2 in Algorithm 2):
$$ f_{j}^{\prime } ({\mathbf{x}}_{i} ) = \frac{{f_{j} ({\mathbf{x}}_{i} ) - z_{j}^{\min } }}{{z_{j}^{\max } - z_{j}^{\min } }},j = 1, \ldots ,m $$
(3)
where \(z_{j}^{\min } = {\min }_{i = 1}^{|Q|} f_{j} ({\mathbf{x}}_{i} )\) and \(z_{j}^{\max } = {\max }_{i = 1}^{|Q|} f_{j} ({\mathbf{x}}_{i} )\).
After normalization, compute \(I_{\varepsilon + }\) and the solution fitness F(x) (line 3 in Algorithm 2). A large value of F(x) is preferable.
$$ \begin{aligned} I_{\varepsilon + } ({\mathbf{x}}_{1} ,{\mathbf{x}}_{2} ) &= \min_{\varepsilon} \left\{ \varepsilon \;\middle|\; f_{i} ({\mathbf{x}}_{1} ) - \varepsilon \le f_{i} ({\mathbf{x}}_{2} ),\; 1 \le i \le m \right\} \\ F({\mathbf{x}}_{1}) &= \sum_{{\mathbf{x}}_{2} \in Q\backslash \{ {\mathbf{x}}_{1} \} } -e^{ - I_{\varepsilon + } ({\mathbf{x}}_{2} ,{\mathbf{x}}_{1} )/0.05} \end{aligned} $$
(4)
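A compact NumPy sketch of Eqs. (3) and (4); it uses the identity that \(I_{\varepsilon + }({\mathbf{x}}_{1},{\mathbf{x}}_{2}) = \max_i (f^{\prime}_{i}({\mathbf{x}}_{1}) - f^{\prime}_{i}({\mathbf{x}}_{2}))\) on the normalized objectives, and assumes no objective is constant over Q:

```python
import numpy as np

def eps_fitness(objs: np.ndarray) -> np.ndarray:
    """Normalize by the ideal and nadir points (Eq. 3), then return the
    I_eps+ based fitness of Eq. (4); larger fitness is better."""
    z_min, z_max = objs.min(axis=0), objs.max(axis=0)
    f = (objs - z_min) / (z_max - z_min)
    # I[a, b] = I_eps+(x_a, x_b) = max_i (f_i(x_a) - f_i(x_b))
    I = (f[:, None, :] - f[None, :, :]).max(axis=2)
    contrib = -np.exp(-I / 0.05)
    np.fill_diagonal(contrib, 0.0)   # exclude x1 itself from the sum
    return contrib.sum(axis=0)       # F(x_b) sums contributions of all x_a != x_b
```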
Next, associate each solution with its nearest reference vector in terms of the angle distance computed using AngleDistance (lines 4–5 in Algorithm 2). The angle distance between each \({\mathbf{x}}_{i} \in Q\,(i = 1,2,...,2N)\) and each \({\mathbf{w}}_{j} \in W\,(j = 1,2,...,N)\) is computed as follows:
$$ {\mathbf{d}}(Q, W) = 1 - \cos ({\mathbf{x}}_{i} ,{\mathbf{w}}_{j} ) = 1 - \frac{{\mathbf{F^{\prime}}}({\mathbf{x}}_{i} ) \cdot {\mathbf{w}}_{j}}{\|{\mathbf{F^{\prime}}}({\mathbf{x}}_{i} )\|_{2} \, \|{\mathbf{w}}_{j}\|_{2}} $$
(5)
where \({\mathbf{F^{\prime}}}({\mathbf{x}}_{i} ) \cdot {\mathbf{w}}_{j}\) is the inner product between \({\mathbf{F^{\prime}}}({\mathbf{x}}_{i} )\) and \({\mathbf{w}}_{j}\), and \(\|{\mathbf{F^{\prime}}}({\mathbf{x}}_{i} )\|_{2}\) and \(\|{\mathbf{w}}_{j}\|_{2}\) are their L2-norms. A small angle distance means the solution and the reference vector are close in terms of angle. A solution \({\mathbf{x}}_{i}\) is associated with a reference vector wj if and only if \({\mathbf{x}}_{i}\) has a smaller angle distance to wj than to any other reference vector in W.
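Equation (5) vectorizes naturally; a sketch, where `F_norm` is the |Q|×m matrix of normalized objective vectors and `W` the N×m matrix of reference vectors:

```python
import numpy as np

def angle_distance(F_norm: np.ndarray, W: np.ndarray) -> np.ndarray:
    """d[i, j] = 1 - cos(x_i, w_j) as in Eq. (5)."""
    norms = np.linalg.norm(F_norm, axis=1, keepdims=True) * \
            np.linalg.norm(W, axis=1)[None, :]
    return 1.0 - (F_norm @ W.T) / norms

def associate(F_norm: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Index of the nearest reference vector for each solution."""
    return angle_distance(F_norm, W).argmin(axis=1)
```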
After finding the nearest reference vector for each solution, we sort the solutions in the same subregion according to the computed fitness through FitSorting (line 6 in Algorithm 2). The solution with the largest fitness value ranks first, and the one with the smallest ranks last. Thus, each solution obtains a rank number (denoted as FitNo in line 6 of Algorithm 2), representing its rank order in the corresponding subregion.
Step 2: Select solutions
After the solutions are sorted within each subregion by fitness, they are selected first through non-dominated sorting and fitness-based sorting for convergence, and then through the max–min-angle strategy for diversity.
First, non-dominated sorting is performed on Q and solutions are put into the corresponding nondomination fronts; thus each solution obtains a nondomination front number (denoted as NDNo in line 8 of Algorithm 2), and NDFi is the set of solutions whose NDNo = i. Next, solutions are chosen front by front: determine l, the minimum value satisfying \(|NDF_{1} \cup ... \cup NDF_{l} | \ge N\), and let \(Q = [NDF_{1} ,...,NDF_{l} ]\) (line 9 in Algorithm 2). Then, if |Q| = N, let P = Q and return P (line 24 in Algorithm 2); otherwise continue selecting solutions according to the FitNo obtained in Step 1. Specifically, determine k, the minimum value satisfying \(|FitF_{1} \cup ... \cup FitF_{k} | \ge N\), and let \(Q = [FitF_{1} ,...,FitF_{k} ]\) (line 11 in Algorithm 2), where FitFi is the set of solutions whose FitNo = i. If |Q| = N, let P = Q (line 21 in Algorithm 2). Otherwise, first put the solutions in the first k − 1 fronts into P and then select solutions from Q \ P using the max–min-angle strategy until |P| = N (lines 14–19 in Algorithm 2).
In the max–min-angle strategy, the angle distance d(Q\P, P) between each remaining solution \({\mathbf{x}}_{q} \in Q\backslash P\) and each selected solution \({\mathbf{x}}_{p} \in P\) is computed according to (5). Taking diversity into account, we prefer remaining solutions that are far from the selected ones; that is, the remaining solution farthest from the selected solutions is selected. This operation loops until |P| = N, and then P is returned.
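A sketch of this greedy fill, reusing the `angle_distance` helper above; a candidate's distance to the selected set is taken as its smallest angle distance to any already selected solution:

```python
def max_min_angle_fill(F_norm, selected, remaining, N):
    """Repeatedly move the remaining solution whose minimum angle distance
    to the selected set is largest, until N solutions are selected."""
    selected, remaining = list(selected), list(remaining)
    while len(selected) < N and remaining:
        d = angle_distance(F_norm[remaining], F_norm[selected])  # Eq. (5)
        best = int(d.min(axis=1).argmax())    # maximize the minimum distance
        selected.append(remaining.pop(best))
    return selected
```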
Figure 2 illustrates the process of environmental selection. Panel (a) shows 12 two-objective solutions in the union population and six reference vectors in W. Panel (b) is the result of association, where the dashed lines starting from the origin are the angular bisectors between adjacent reference vectors. The clusters corresponding to w1 ~ w6 are \(\emptyset\), \(\{ s_{1} ,s_{2} ,s_{3} \}\), \(\{ s_{4} ,s_{5} \}\), \(\{ s_{6} ,s_{7} \}\), \(\{ s_{8} ,s_{9} ,s_{10} \}\), and \(\{ s_{11} ,s_{12} \}\). Then, the solutions in each subregion are sorted by the computed fitness, and the result is shown in panel (c), where the number alongside each circle is its rank order, i.e., FitNo. Next, solutions are selected first through non-dominated sorting and then through fitness-based sorting. In panel (d), s2 and s8 belong to the second nondomination front, and the other ten solutions belong to the first. The first nondomination front already contains more than six solutions; therefore, s2 and s8 are deleted and now \(Q = \{ s_{1} ,s_{3} ,s_{4} ,s_{5} ,s_{6} ,s_{7} ,s_{9} ,s_{10} ,s_{11} ,s_{12} \}\). Similarly, k = 2 is determined. In panel (e), s3, s4, s6, s9, and s11, whose FitNo = 1, are first put into P. Thus, only one more solution needs to be selected using the max–min-angle strategy. The angles between the remaining solutions and their nearest selected solution are \(\angle s_{1} s_{3}\), \(\angle s_{5} s_{4}\), \(\angle s_{7} s_{6}\), \(\angle s_{10} s_{11}\), and \(\angle s_{12} s_{11}\), among which \(\angle s_{1} s_{3}\) is the largest. Therefore, s1 is selected and the final set is \(P = \{ s_{1} ,s_{3} ,s_{4} ,s_{6} ,s_{9} ,s_{11} \}\).

Adjust reference vectors

Adjustment timing

In reference vector adjustment, the adjustment frequency is an important issue. Frequent adjustment of the reference vectors is likely to damage the convergence of the population because, in that case, solutions evolve along the directions of changing and unstable subproblems. On the contrary, adjusting too rarely wastes computational effort on unnecessary subproblems whose reference vectors do not intersect the optimal PF. What is more, if only the adjustment frequency is considered, the reference vectors will be adjusted as soon as the period is reached, regardless of whether they need adjustment or not. If there is still room for the subproblems to improve their convergence, the reference vectors need not be adjusted. Ideally, reference vectors should be adjusted when the population is convergent on the whole. To this end, we set two conditions, Condition1 and Condition2, and employ the improvement rate (IMR) to control reference vector adjustment. As shown in Algorithm 1, if Condition1 is true, IMR is computed, and if Condition2 is also true, the reference vectors are adjusted.
At the beginning of the evolution, the population is still exploring and it is inappropriate to perform adjustment. At the end of the evolution, if the reference vectors can still be adjusted, there will not be enough time for the subproblems to converge. Therefore, in this paper, reference vectors are not allowed to be adjusted in the first 20% and the last 10% of generations. In the other generations, reference vectors can be adjusted at a frequency of fr. Thus, Condition1 can be stated as: the current generation G is within 20% ~ 90% of Gmax, and G is an integral multiple of fr × Gmax. If Condition1 is true, IMR is computed to reflect the relative improvement of the convergence degree of each subproblem.
$$ IMR = \frac{{CM - CM_{old} }}{{CM_{old} }} $$
(6)
where CM is the convergence metric of the current generation, and CMold is the convergence metric at the last computation. CM is computed using CalConvergence, as shown in Algorithm 3. First, the angle distance d(W, P) between each reference vector in W and the solutions in the normalized P is computed. Then, the nearest solution of each reference vector is found, and the d1 distance of this solution on its associated vector is computed. The d1 distances on all reference vectors constitute CM.
Here, we use Fig. 3 to explain why we associate each reference vector with its nearest solution (abbreviated as v2s), rather than associating each solution with its nearest reference vector (abbreviated as s2v).
In s2v, all solutions are associated with their nearest reference vector. If more than one solution is associated with a reference vector, the solution with the smallest d1 distance is chosen as the final associated solution of that vector. Therefore, in Fig. 3(a), the convergence degrees of subproblems w1, w2, w4, and w5 are represented by the d1 distances of a, b, d, and e (denoted by the red lines), respectively, while those of w3 and w6 cannot be represented because no solution is associated with them. Although w3 and w6 are invalid reference vectors at present, there should be solutions that can estimate their convergence degree before they are adjusted. Taking this into account, we choose to associate each reference vector with its nearest solution, which guarantees that each reference vector has one associated solution whose convergence degree can be computed, as shown in Fig. 3(b).
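Putting the v2s association and the d1 distance together, a minimal sketch of CalConvergence (Algorithm 3) follows, reusing `angle_distance`; since `F_norm` is already translated by the ideal point, d1 reduces to a projection length:

```python
import numpy as np

def cal_convergence(F_norm: np.ndarray, W: np.ndarray) -> np.ndarray:
    """CM[j] = d1 distance of the solution nearest (in angle) to w_j."""
    nearest = angle_distance(F_norm, W).argmin(axis=0)   # v2s association
    w_unit = W / np.linalg.norm(W, axis=1, keepdims=True)
    return np.abs(np.einsum('ij,ij->i', F_norm[nearest], w_unit))
```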
After IMR is computed using (6), convert it as follows:
$$ IMR = \begin{cases} -1, & IMR < -\alpha \\ 1, & IMR > \alpha \\ 0, & \text{otherwise} \end{cases} $$
(7)
where α is a parameter that determines whether a subproblem is convergent or not. IMR < −α indicates that the evolution of the corresponding subproblem is still converging and there is still room for convergence improvement. Otherwise, it indicates that the evolution of the corresponding subproblem is in a phase of diversity or stagnation and there is not much room for convergence improvement under the current reference vector.
Subsequently, if Condition2 is true, the reference vectors are adjusted using AdjustVector. Here, we define Condition2 as: the sum of IMR is larger than or equal to zero, which indicates that the evolution of the subproblems has converged on the whole. An example in Fig. 4 illustrates how to determine whether to adjust reference vectors or not.
In Fig. 4, the solid circles and the red lines represent solutions and their d1 distances on the corresponding reference vectors in generation t1, and the solid triangles and the blue lines denote those in generation t2 (t2 > t1). Compute \(IMR = (CM_{{t_{2} }} - CM_{{t_{1} }} )/CM_{{t_{1} }}\) and compare it with α. Suppose α is 0.01 and the IMR of each subproblem is smaller than −0.01 here. Then, the converted IMR = [−1, −1, −1, −1, −1, −1]. The sum of IMR is −6, which indicates that there is no need for reference vector adjustment.
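Combining (6), (7), and Condition2, the trigger test reduces to a few lines; a sketch:

```python
import numpy as np

def should_adjust(CM: np.ndarray, CM_old: np.ndarray, alpha: float = 0.01) -> bool:
    """True if the population is judged convergent on the whole (Condition2)."""
    imr = (CM - CM_old) / CM_old                                         # Eq. (6)
    converted = np.where(imr < -alpha, -1, np.where(imr > alpha, 1, 0))  # Eq. (7)
    return int(converted.sum()) >= 0                                     # Condition2
```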

Delete and add reference vectors

If both Condition1 and Condition2 are true, adjust reference vectors using AdjustVector as shown in Algorithm 4. In the evolutionary process, some reference vectors can associate with more than one solution, which are called the crowded reference vectors, and some may associate with no solutions, which are called the invalid reference vectors. The main idea of adjustment in MBRA is to delete the invalid reference vectors and then add new ones in the subregion of the crowded reference vectors.
Step 1: Delete invalid reference vectors
First, normalize P. Then, associate the solutions in P with their nearest reference vectors in W in terms of the angle distance d(P, W) to obtain rho (line 4 in Algorithm 4). The reference vectors with rho = 0 are invalid and are deleted from W.
Step 2: Add new reference vectors
This step adds new reference vectors to W one by one until the size of W returns to N. There are two considerations when adding new reference vectors. One consideration is that more computational effort should be allocated to the crowded reference vectors. Therefore, we add new reference vectors in the subregion of the crowded ones. The other consideration is that the newly added reference vector should maintain the diversity of W. Therefore, we generate new reference vectors using the solution farthest from the most crowded reference vector.
First, find the reference vector wmost that is associated with the most solutions according to rho. If more than one reference vector satisfies this, one is chosen randomly. Then, determine the farthest solution xf in the subregion of wmost in terms of angle distance. Based on \({\mathbf{F}}({\mathbf{x}}_{f} ) = (f_{1} ({\mathbf{x}}_{f} ),f_{2} ({\mathbf{x}}_{f} ),...,f_{m} ({\mathbf{x}}_{f} ))\), generate the new reference vector wnew as follows and add it into W.
$$ w_{new,i} = \frac{{f_{i} ({\mathbf{x}}_{f} ) - z_{i}^{\min } }}{{\sum\nolimits_{i = 1}^{m} {(f_{i} ({\mathbf{x}}_{f} ) - z_{i}^{\min } )} }},\quad i = 1,2,...,m $$
(8)
Next, reassociate the solutions in P except xf with W according to the angle distance d(P\xf, W) and update rho in order to add the next reference vector, until the size of W is N. At last, update CM according to the new reference vector set W.
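A NumPy sketch of this delete-and-add step (Algorithm 4), again reusing `angle_distance`; one simplification is that ties for the most crowded reference vector are broken by argmax rather than randomly:

```python
import numpy as np

def adjust_vectors(F: np.ndarray, W: np.ndarray, N: int) -> np.ndarray:
    """Delete invalid reference vectors, then add new ones via Eq. (8).
    Assumes no objective is constant over the population."""
    z_min, z_max = F.min(axis=0), F.max(axis=0)
    F_norm = (F - z_min) / (z_max - z_min)
    rho = np.bincount(angle_distance(F_norm, W).argmin(axis=1), minlength=len(W))
    W = W[rho > 0]                           # Step 1: delete invalid vectors
    active = np.ones(len(F), dtype=bool)     # solutions still used for association
    while len(W) < N:                        # Step 2: add new vectors one by one
        d = angle_distance(F_norm[active], W)
        assoc = d.argmin(axis=1)
        most = int(np.bincount(assoc, minlength=len(W)).argmax())
        mask = assoc == most
        members = np.flatnonzero(active)[mask]
        xf = members[d[mask, most].argmax()]              # farthest solution in subregion
        w_new = (F[xf] - z_min) / (F[xf] - z_min).sum()   # Eq. (8)
        W = np.vstack([W, w_new])
        active[xf] = False                   # reassociate P except xf next round
    return W
```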
Figure 5 illustrates the deletion and addition process of reference vectors. w1 ~ w6 are six reference vectors, a ~ f are six solutions in the population, and the disconnected curve is the optimal PF. In the left panel of Fig. 5, a is associated with w1, b with w2, c and d with w4, e and f with w5, and no solution is associated with w3 or w6. Therefore, w3 and w6 are invalid reference vectors and are deleted from W, as shown in the middle panel. Next, two new reference vectors should be added into W. Because both w4 and w5 have two associated solutions, we first choose one randomly, e.g., w4, to add a new reference vector. In the upper right panel, c is chosen to generate the first new reference vector \(w_{3}^{\prime }\) because c is farther from w4 than d. In the lower right panel, {a, b, d, e, f} are reassociated with \(\{ w_{1} ,w_{2} ,w_{3}^{\prime } ,w_{4} ,w_{5} \}\), and w5 becomes the crowded reference vector because it has two associated solutions. Then, f is chosen to generate the second new reference vector \(w_{6}^{\prime }\) because f is farther from w5 than e. Finally, a new reference vector set \(W = \{ w_{1} ,w_{2} ,w_{3}^{\prime } ,w_{4} ,w_{5} ,w_{6}^{\prime } \}\) is obtained.
Here, we clarify the differences between MBRA and MOEA/D-AWA, as summarized in Table 1. The process of MOEA/D-AWA described in Sect. "Related work" reveals that, apart from the shared delete-and-add framework, MBRA is quite different from MOEA/D-AWA. In MBRA, the reference vectors are adjusted according to their association relation with solutions, by virtue of two types of reference vectors: the invalid reference vectors, which have no associated solutions, and the crowded reference vectors, which have more than one associated solution. An invalid reference vector indicates that no solution is evolving along its direction, and a crowded reference vector indicates that its subregion contains more non-dominated solutions and needs more reference vectors. Therefore, we delete the invalid reference vectors and generate new ones in the subregion of the most crowded reference vector according to the farthest solution from this vector. In this way, MBRA saves computing resources on the invalid reference vectors and assigns more reference vectors to the region near the crowded ones. Besides, using the farthest associated solution from the most crowded reference vector maintains the diversity of the reference vector set. The omission of an external archive in MBRA also saves computing time compared with MOEA/D-AWA.
Table 1
Differences between MBRA and MOEA/D-AWA

Difference          MBRA                                  MOEA/D-AWA
When to adjust      Based on the proposed metric          At a fixed frequency
Where to delete     Invalid reference vectors             Overcrowded reference vectors
Where to add        Crowded reference vectors             Sparse reference vectors
Sparsity metric     The number of associated solutions    Vicinity distance
External archive    Not needed                            Needed

Computational complexity analysis

The main computational cost of MBRA comes from three parts, i.e., offspring generation, environmental selection, and reference vector adjustment, as analysed in Table 2, where D, m, and N denote the number of decision variables, objectives, and solutions in the population, respectively. According to the analysis, the complexities of offspring generation, environmental selection, and reference vector adjustment are all O(mN²). Therefore, the overall computational complexity of MBRA is O(mN²).
Table 2
Computational complexity of procedures in MBRA

Part                                     Procedure                       Complexity
Offspring generation                     Mating selection                O(mN²)
                                         Variation (SBX and PM)          O(DN)
Environmental selection (Algorithm 2)
  Step 1                                 Objective normalization         O(mN)
                                         Compute fitness                 O(mN²)
                                         Associate Q with W              O(mN²)
                                         Fitness-based sorting           O(N²)
  Step 2                                 Non-dominated sorting           O(mN²)
                                         Max–min-angle selection         O(mN²)
Adjust vector (Algorithm 4)
  Deletion                               Objective normalization         O(mN)
                                         Associate P with W              O(mN²)
  Addition                               Generate new vectors            O(m)
                                         Reassociate P\xf with W         O(mN²)
                                         Update CM using Algorithm 3     O(mN²)

Experimental setup

This section introduces the experimental setups, including compared algorithms, test problems, performance indicators, and parameter settings. All experiments run on PlatEMO [58].

Compared algorithms

To validate the performance of MBRA, we choose seven classic algorithms for comparison, i.e., MOEA/D [34], NSGA-III [40], RVEA [49], MOMBI-II [25], A-NSGA-III [47], RVEAa [49], and MOEA/D-AWA [37]. The first four are of the type without reference vector adjustment: MOEA/D is decomposition-based, NSGA-III combines Pareto dominance and reference vectors, RVEA is reference vector-based, and MOMBI-II is indicator-based. The last three are of the type with reference vector adjustment. Besides, four state-of-the-art algorithms are also included: RPEA [61], SPEA/R [41], MaOEA-IGD [23], and MaOEA-IT [62]. The first two are algorithms with reference points/directions, and the last two are newly proposed algorithms based on an indicator and a two-stage strategy, respectively. The comparison of MBRA with these classic and new algorithms demonstrates its performance persuasively.

Test problems

In the empirical study, we choose 27 test problems, namely DTLZ1-DTLZ7, IDTLZ1-IDTLZ2, WFG1-WFG9, and MaF1-MaF9, each of which is tested with 3, 5, 8, 10, and 15 objectives. Among these, DTLZ1-DTLZ4 and WFG4-WFG9 have regular PFs. The other test problems have irregular PFs, and we further divide them into two groups for convenience of discussion: DTLZ5-DTLZ7, IDTLZ1-IDTLZ2, and WFG1-WFG3 are discussed together, called irregular problems, while the MaF test suite is discussed separately. The features of the test problems with irregular PFs are summarized in Table 3. They cover different types of irregular PF shapes, such as disconnected, degenerate, and inverted. Note that for DTLZ1, D is m + 4; for MaF7, D is m + 19; for MaF8 and MaF9, D is 2; and for the others, D is m + 9.
Table 3
Features of problems with irregular PF

Features         Test problems
Linear           IDTLZ1, WFG3, MaF1
Convex           IDTLZ2, MaF3, MaF4
Concave          DTLZ5, DTLZ6, MaF2, MaF5, MaF6
Mixed            DTLZ7, WFG1, MaF7
Degenerate       DTLZ5, DTLZ6, WFG3, MaF6
Disconnected     DTLZ7, MaF7
Inverted         IDTLZ1, IDTLZ2, MaF1, MaF4
Multi-modal      DTLZ7, WFG2, MaF3, MaF4, MaF7
Biased           WFG1, MaF5
Non-separable    WFG2, WFG3

Performance indicators

IGD and HV are adopted to evaluate the performance of the algorithms, and GD and Spread are adopted to help analyse convergence and diversity separately. IGD and HV are comprehensive indicators that evaluate convergence and diversity together. GD is a convergence indicator, and a small value indicates that the obtained approximation set is close to the optimal PF. Spread is a diversity indicator, and Spread = 0 indicates that all Pareto optimal solutions are equidistantly spaced. For HV, the larger the value, the better the performance; for IGD, GD, and Spread, the smaller the better.
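As an illustration, IGD is a short computation; a sketch, where `ref_front` holds points sampled from the optimal PF and `approx` the obtained approximation set:

```python
import numpy as np

def igd(ref_front: np.ndarray, approx: np.ndarray) -> float:
    """Mean distance from each reference point to its nearest obtained
    solution; smaller is better."""
    d = np.linalg.norm(ref_front[:, None, :] - approx[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```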

Parameter settings

(1) Population size and termination condition. For decomposition-based MOEAs, N equals the number of reference vectors, which is determined using Das and Dennis's systematic method (see the sketch after Table 4). In this paper, when m = 3, (H1, H2) = (13, 0); when m = 5, (H1, H2) = (10, 0); when m = 8 and 10, (H1, H2) = (3, 2); and when m = 15, (H1, H2) = (2, 1), where H1 and H2 are the numbers of divisions along each objective axis for the outer and inner layers, respectively. For non-decomposition-based MOEAs, the population size is the same as for the decomposition-based counterparts for fair comparison. Therefore, the population sizes N corresponding to 3, 5, 8, 10, and 15 objectives are 105, 210, 156, 275, and 135, respectively. In this paper, the maximum number of fitness evaluations (FEmax) summarized in Table 4 is adopted as the termination condition, following [59, 60].
Table 4
Termination condition (FEmax) of test problems

m                             3        5         8         10        15
DTLZ1-DTLZ4                   21,000   42,000    31,200    55,000    27,000
DTLZ5-DTLZ7, IDTLZ1-IDTLZ2    52,500   105,000   78,000    137,500   67,500
WFG1-WFG3                     52,500   105,000   156,000   275,000   202,500
WFG4-WFG9                     31,500   63,000    78,000    137,500   135,000
MaF1                          20,000   60,000    80,000    100,000   150,000
MaF2                          20,000   40,000    60,000    70,000    100,000
MaF3                          60,000   80,000    120,000   120,000   150,000
MaF4                          60,000   80,000    150,000   150,000   200,000
MaF5                          10,000   20,000    20,000    20,000    40,000
MaF6                          20,000   40,000    40,000    40,000    60,000
MaF7                          30,000   40,000    60,000    70,000    100,000
MaF8                          60,000   80,000    80,000    80,000    100,000
MaF9                          60,000   60,000    60,000    60,000    100,000
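For reference, these population sizes follow from the simplex-lattice counting formula \(\binom{H + m - 1}{m - 1}\), with the inner layer added when H2 > 0; a sketch:

```python
from math import comb

def two_layer_size(m: int, H1: int, H2: int) -> int:
    """Number of Das and Dennis reference vectors in a two-layer design."""
    n = comb(H1 + m - 1, m - 1)
    if H2 > 0:
        n += comb(H2 + m - 1, m - 1)
    return n

# two_layer_size(3, 13, 0) == 105; two_layer_size(10, 3, 2) == 220 + 55 == 275
```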
(2) Parameter settings for algorithms. For all the compared algorithms, we use SBX and PM to generate offspring solutions. The crossover probability is 1 and the mutation probability is 1/D. The distribution indexes of SBX and PM are both 20. In MOEA/D, the PBI approach is used as the aggregation function. In MBRA, α, the parameter that determines whether a subproblem is convergent or not, is 0.01, and its sensitivity is analysed in Sect. "Sensitivity analysis of α". The frequency of reference vector adjustment fr is 0.1. The parameters of the other algorithms are the same as in their original publications.
(3) Statistical analysis. Each algorithm is run 30 times independently on every test problem. To determine whether there is a statistically significant difference in performance between two algorithms, the Wilcoxon rank sum test is used with a significance level of 0.05.
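A sketch of how a single +/−/= entry could be produced with SciPy's rank-sum test; classifying the direction by the median is an assumption of this sketch, since the paper does not spell out that detail:

```python
import numpy as np
from scipy.stats import ranksums

def compare_runs(a: np.ndarray, b: np.ndarray, alpha: float = 0.05) -> str:
    """Classify 30 runs of a compared algorithm (a) against MBRA (b) on a
    minimized indicator such as IGD: '+' better, '-' worse, '=' comparable."""
    _, p = ranksums(a, b)
    if p >= alpha:
        return '='
    return '+' if np.median(a) < np.median(b) else '-'
```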

Results and discussions

First, we discuss the experimental results from two aspects: the Wilcoxon rank sum test and the number of objectives. Then, we study the effect of the metric-based adjustment timing strategy. Finally, we perform a sensitivity analysis of α.

Analyses of test problems with compared algorithms

In terms of the Wilcoxon rank sum test

Only the final result of the Wilcoxon rank sum test is shown in the text; the complete raw data are displayed in the supplementary file. The symbols +, −, and = indicate that the compared algorithm is significantly better than, worse than, and comparable to MBRA from a statistical perspective. If the number of "+" is less than that of "−", the result is marked in bold, which indicates that MBRA is better than the compared algorithm.
Table 5 summarizes the results of the Wilcoxon rank sum test on the regular test problems. In terms of IGD, MBRA is significantly better than MOEA/D, MOMBI-II, A-NSGA-III, MOEA/D-AWA, RPEA, SPEA/R, MaOEA-IGD, and MaOEA-IT, and worse than NSGA-III, RVEA, and RVEAa. In terms of HV, MBRA is significantly better than MOEA/D, MOMBI-II, A-NSGA-III, RVEAa, RPEA, MaOEA-IGD, and MaOEA-IT, and worse than NSGA-III, RVEA, MOEA/D-AWA, and SPEA/R. As GD shows, MBRA is beaten on convergence by all the compared algorithms except RVEAa, RPEA, and MaOEA-IT. The poor convergence of MBRA on regular problems is likely due to reference vector adjustments that need not have been performed. Besides, the comparison of GD and Spread shows that these two indicators are contradictory on MOEA/D, MOMBI-II, A-NSGA-III, RVEAa, MOEA/D-AWA, SPEA/R, and MaOEA-IGD. That is to say, when the GD of one of these algorithms is better than MBRA's, its Spread is worse, and vice versa. However, MBRA is better than these algorithms on IGD, HV, or both. On all four indicators, NSGA-III and RVEA are superior to MBRA, which indicates that NSGA-III and RVEA can solve regular problems well. This can be explained by the fact that the reference vectors in these two algorithms are distributed uniformly, which fits the PFs of the regular test problems well.
Table 5
Result of Wilcoxon rank sum test on regular test problems (+/−/= vs. MBRA)

Algorithms    IGD        HV         GD         Spread
MOEA/D        13/35/2    12/34/4    33/11/6    20/29/1
NSGA-III      23/20/7    34/6/10    32/9/9     33/13/4
RVEA          31/14/5    34/7/9     33/12/5    35/8/7
MOMBI-II      12/32/6    17/29/4    37/10/3    14/36/0
A-NSGA-III    7/31/12    16/24/10   26/13/11   16/32/2
RVEAa         26/20/4    15/26/9    21/23/6    35/12/3
MOEA/D-AWA    20/25/5    27/16/7    33/10/7    7/38/5
RPEA          6/38/6     6/34/10    20/22/8    7/39/4
SPEA/R        18/26/6    29/11/10   29/19/2    17/25/8
MaOEA-IGD     3/40/7     2/40/8     35/11/4    13/30/7
MaOEA-IT      0/50/0     0/46/4     0/49/1     8/40/2

A result in bold indicates that MBRA is better than the compared algorithm
Table 6 displays the results of the Wilcoxon rank sum test on the irregular test problems. In terms of IGD, MBRA is significantly better than MOEA/D, RVEA, MOMBI-II, MOEA/D-AWA, SPEA/R, MaOEA-IGD, and MaOEA-IT, worse than RVEAa and RPEA, and comparable to NSGA-III and A-NSGA-III. In terms of HV, MBRA is significantly better than MOEA/D, RVEA, RPEA, SPEA/R, MaOEA-IGD, and MaOEA-IT, worse than NSGA-III, MOMBI-II, A-NSGA-III, and MOEA/D-AWA, and comparable to RVEAa. The comparison of GD and Spread shows that these two indicators are contradictory on MOEA/D, NSGA-III, RVEA, MOMBI-II, A-NSGA-III, RPEA, and MaOEA-IGD. For RVEAa, both its GD and Spread are better than MBRA's, while for MOEA/D-AWA, SPEA/R, and MaOEA-IT, both are worse.
Table 6
Result of Wilcoxon rank sum test on irregular test problems (+/−/= vs. MBRA)

Algorithms    IGD        HV         GD         Spread
MOEA/D        12/26/2    10/27/3    26/13/1    5/32/3
NSGA-III      18/18/4    20/12/8    10/26/4    20/15/5
RVEA          12/21/7    13/21/6    11/29/0    21/14/5
MOMBI-II      6/27/7     19/15/6    25/13/2    3/33/4
A-NSGA-III    18/18/4    20/13/7    7/25/8     23/14/3
RVEAa         19/13/8    17/17/6    20/14/6    27/10/3
MOEA/D-AWA    11/21/8    22/13/5    14/21/5    7/32/1
RPEA          20/18/2    18/19/3    23/11/6    6/28/6
SPEA/R        7/25/8     15/19/6    6/32/2     10/26/4
MaOEA-IGD     1/34/5     5/27/8     24/10/6    3/31/5
MaOEA-IT      0/39/1     1/36/3     7/30/3     9/24/7

A result in bold indicates that MBRA is better than the compared algorithm
Table 7 presents the results of the Wilcoxon rank sum test on the MaF test problems. MBRA is significantly better than all the compared algorithms, with 60%, 60%, 69%, 76%, 56%, 47%, 56%, 53%, 82%, 93%, and 93% better IGD values, and 69%, 51%, 76%, 62%, 49%, 62%, 60%, 67%, 87%, 98%, and 96% better HV values out of 45. It also obtains 17 best IGD values and 12 best HV values, both ranking first among all the compared algorithms, indicating that MBRA performs best on the MaF test problems and has the capability to tackle problems with different PFs. Combining Tables 5, 6, and 7, it can be seen that the GD values of MOEA/D, MOMBI-II, and MaOEA-IGD are always better than MBRA's on all test problems, which can be attributed to the aggregation function of MOEA/D and to the indicator-based environmental selection of the other two. Additionally, the Spread of RVEAa is always better than MBRA's on all test problems; its good diversity can be attributed to its reference vector adaptation and regeneration strategy.
Table 7
Result of Wilcoxon rank sum test on MaF test problems (+/−/= vs. MBRA)

Algorithms    IGD        HV         GD         Spread
MOEA/D        11/27/7    8/31/6     30/10/5    11/27/7
NSGA-III      14/27/4    17/23/5    8/32/5     17/21/7
RVEA          10/31/4    6/34/5     7/32/6     18/21/6
MOMBI-II      9/34/2     15/28/2    25/10/10   13/29/3
A-NSGA-III    17/25/3    17/22/6    6/34/5     18/22/5
RVEAa         17/21/7    12/28/5    9/28/8     31/11/3
MOEA/D-AWA    17/25/3    12/27/6    17/20/8    7/34/4
RPEA          12/24/9    8/30/7     18/16/11   9/29/7
SPEA/R        6/37/2     3/39/3     6/37/2     5/32/8
MaOEA-IGD     0/42/3     0/44/1     23/16/6    8/33/4
MaOEA-IT      2/42/1     1/43/1     7/34/4     10/31/4

A result in bold indicates that MBRA is better than the compared algorithm
We choose three 10-objective test problems, i.e., MaF2, MaF5, and MaF7, whose optimal PFs are shown in Fig. 6, to visualize the convergence and diversity of the approximation sets obtained by each algorithm, shown in Figs. 7, 8, and 9, respectively. MaF2 is a variant of DTLZ2, MaF5 is a scaled DTLZ4, and MaF7 is disconnected. In the three figures, the PFs obtained by MBRA are quite similar to the optimal PFs, revealing the competitive performance of the proposed algorithm.

In terms of the number of objectives

To discuss the performance of MBRA with different numbers of objectives, we obtain the average rank of each algorithm on the 3-, 5-, 8-, 10-, and 15-objective test problems after ranking the algorithms by IGD and HV on each test problem, as depicted in Fig. 10. A smaller rank value means better performance.
On the whole, MBRA performs worst on the 3-objective test problems among the 3, 5, 8, 10, and 15 objectives, in terms of both IGD and HV, which indicates that MBRA tackles MaOPs well while performing relatively poorly on MOPs. For the other four numbers of objectives, in terms of IGD, on the regular test problems MBRA ranks first on 10 and 15 objectives, followed by 5 and then 8 objectives. On the irregular and MaF test problems, MBRA ranks first on 8 and 10 objectives, better than on 5 and 15. In terms of HV, on the regular test problems, the best rank of MBRA is on 8 and 10 objectives. On the irregular problems, MBRA ranks first on 5 objectives, followed by 8 and then 10. On the MaF test problems, MBRA ranks first on 5, 8, 10, and 15 objectives, which indicates that MBRA is the best among all the algorithms on the MaF test problems.

State-of-the-art algorithms with reference vector adjustment on MaF test problems

In addition to the abovementioned compared algorithms, we choose another three recently proposed algorithms with reference vector adjustment, i.e., AdaW [38], DEA-GNG [54], and RVEA-iGNG [55], to study the performance of MBRA through tests on the MaF problems. AdaW has been described in Sect. "Related work". DEA-GNG and RVEA-iGNG are two algorithms combined with a machine learning technique, i.e., the growing neural gas network. DEA-GNG combines the nodes in the network, which are adopted as reference vectors, with another set of uniformly distributed reference vectors to guide the evolution, searching for Pareto optimal solutions according to the PF shape. RVEA-iGNG uses an improved GNG to learn the topology of the PF, with the solutions generated during a period of the search process as training data, and it uses the solutions in the current population, in a number of previous populations, and in an archive to delete and add nodes. The experimental setup is identical to that described above, and the Wilcoxon rank sum test results regarding HV are summarized in Table 8 (raw data in the supplementary file), where +, −, and = indicate that the compared algorithm is significantly better than, worse than, and comparable to MBRA at a significance level of 0.05, and the last column is the average of all ranks on the 45 MaF test problems.
Table 8
Comparison with AdaW, DEA-GNG, and RVEA-iGNG on MaF test problems regarding HV (+/−/= vs. MBRA)

Algorithms    +     −     =     Rank
AdaW          18    24    3     3.07 (4)
DEA-GNG       15    19    11    2.73 (3)
RVEA-iGNG     30    11    4     1.69 (1)
MBRA          –     –     –     2.51 (2)
Table 8 reveals that on the MaF test problems, MBRA falls behind RVEA-iGNG and is moderately better than AdaW and DEA-GNG. Although the three compared algorithms have been shown to be effective in their original publications, their computational cost is quite high. The comparison between MBRA and the three algorithms regarding runtime is displayed in Fig. 11, which shows that AdaW, DEA-GNG, and RVEA-iGNG need much more time than MBRA, as the price of their good convergence and diversity.

Effect of metric-based adjustment timing strategy

To validate the effect of the metric-based adjustment timing strategy, we first compare MBRA with variants using different adjustment timing strategies, and then transfer the metric-based adjustment timing strategy to A-NSGA-III.

Comparison with different adjustment timing strategies

In MBRA, the adjustment timing of the reference vectors is determined using IMR. Here, we compare this strategy with three variants of MBRA: MBRA1, where reference vectors are not adjusted during the evolutionary process; MBRA2, where reference vectors are adjusted in every generation during 20% ~ 90% of Gmax; and MBRA3, where reference vectors are adjusted at a frequency of fr. The other parameters and settings of MBRA1, MBRA2, and MBRA3 are the same as in MBRA. The results of the Wilcoxon rank sum test are summarized in Table 9, where +/−/= means that the corresponding variant is significantly better than, worse than, or comparable to MBRA. The average ranks of MBRA, MBRA1, MBRA2, and MBRA3 on the four indicators are shown in Fig. 12.
Table 9
Result of Wilcoxon rank sum test of different adjustment timing strategies (+/−/= vs. MBRA)

Problems     Indicators    MBRA1       MBRA2       MBRA3
Regular      IGD           18/11/21    12/28/10    7/18/25
             HV            20/0/30     0/41/9      1/18/31
             GD            18/2/30     6/32/12     4/16/30
             Spread        15/6/29     23/19/8     8/10/32
Irregular    IGD           12/17/11    9/18/13     6/12/22
             HV            4/19/17     11/11/18    4/6/30
             GD            11/15/14    12/13/15    6/7/27
             Spread        17/14/9     4/24/12     6/10/24
MaF          IGD           21/19/5     16/16/13    5/11/29
             HV            14/19/12    11/18/16    4/9/32
             GD            13/18/14    7/12/26     2/8/35
             Spread        17/17/11    11/14/20    1/10/34
As shown in Table 9, on the regular test problems, MBRA1 is significantly better than MBRA in terms of every indicator. However, on the irregular and MaF test problems, MBRA is significantly better than MBRA1 when evaluated using IGD and HV, except for IGD on MaF, where MBRA is slightly worse. Compared with MBRA2, MBRA is significantly better, except for Spread on the regular problems, where MBRA2 is better, and HV on the irregular problems and IGD on MaF, where MBRA and MBRA2 are comparable. Compared with MBRA3, MBRA is significantly better on all test problems and all indicators. In conclusion, MBRA is better than MBRA1 (although worse on the regular test problems) and significantly better than MBRA2 and MBRA3, which validates the effect of the IMR-based adjustment timing strategy in MBRA. The average ranks of MBRA, MBRA1, MBRA2, and MBRA3 in Fig. 12 support the same conclusion as the Wilcoxon rank sum test.

A-NSGA-III combined with IMR-based adjustment timing strategy

In A-NSGA-III, reference vectors are adjusted in each generation, which benefits optimization problems with irregular PFs but harms problems with regular PFs. Here, we replace the adjustment timing strategy in A-NSGA-III with the IMR-based strategy, which adjusts reference vectors periodically and conditionally, to see whether this variant (called IMR-NSGA-III) can improve the performance of A-NSGA-III; a minimal sketch of this gate is given after Table 10. The experimental result is shown in Table 10, where +/−/= means that IMR-NSGA-III is significantly better than, worse than, or comparable to A-NSGA-III, and Best denotes the number of best results obtained by IMR-NSGA-III or A-NSGA-III. On the regular test problems, IMR-NSGA-III is better than A-NSGA-III on all indicators and obtains more best results. On the irregular test problems, IMR-NSGA-III obtains more or an equal number of best results in terms of IGD, HV, and GD, although it is slightly worse than A-NSGA-III according to the Wilcoxon rank sum test. On the MaF test problems, IMR-NSGA-III improves the convergence (evaluated by GD) but worsens the diversity (evaluated by Spread) of A-NSGA-III, resulting in an overall deterioration when evaluated using IGD or HV. In conclusion, the IMR-based adjustment timing strategy can improve the overall performance of A-NSGA-III on regular test problems, obtain more best results on irregular test problems, and promote the convergence of A-NSGA-III on MaF test problems.
Table 10
Comparison between IMR-NSGA-III and A-NSGA-III
Problems    Indicators   +/−/=     Best: IMR-NSGA-III   Best: A-NSGA-III
Regular     IGD          17/2/31   32                   18
            HV           16/3/31   30                   20
            GD           11/5/34   31                   19
            Spread       18/0/32   36                   14
Irregular   IGD          0/3/37    25                   15
            HV           5/6/29    24                   16
            GD           3/5/32    20                   20
            Spread       0/5/35    14                   26
MaF         IGD          2/5/38    20                   25
            HV           1/4/40    17                   28
            GD           4/1/40    29                   16
            Spread       1/5/39    19                   26
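The sketch below shows how such an IMR gate could wrap the per-generation adaptation of A-NSGA-III; the d1 distance is taken here as the projection of the translated objective vector onto its unit reference vector, as in PBI-style decomposition, and the one-solution-per-subproblem pairing, the names, and the all-subproblems criterion are simplifying assumptions rather than the exact form of (7).

```python
import numpy as np

def d1_degree(objs, rvs, ideal):
    """Convergence degree of each subproblem: the projection (d1)
    distance of its associated solution along the reference vector.
    Assumes row i of `objs` is the solution associated with row i of
    `rvs`."""
    shifted = objs - ideal
    unit = rvs / np.linalg.norm(rvs, axis=1, keepdims=True)
    return np.sum(shifted * unit, axis=1)  # one d1 value per subproblem

def population_converged(d1_prev, d1_now, alpha=0.01):
    """IMR gate: a subproblem is still improving if its relative change
    in d1 is below -alpha; allow adjustment only when none is."""
    imr = (d1_now - d1_prev) / np.maximum(d1_prev, 1e-12)
    return bool(np.all(imr >= -alpha))
```

IMR-NSGA-III would evaluate `population_converged` at the end of each checking period and run the reference point adaptation of A-NSGA-III only when it returns True.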

Sensitivity analysis of α

In (7), α is used to determine whether the evolution of a single subproblem has converged. If IMR is smaller than −α, the subproblem is considered not convergent, i.e., its convergence can still be improved. Here, we perform a sensitivity analysis of α to study its influence on MBRA, testing α = 0.01, 0.02, 0.05, 0.07, 0.1, 0.15, 0.2, 0.3, and 0.4, with the other experimental settings the same as in Sect. "Experimental setup". As shown in Fig. 13, α = 0.01 is the best on the regular test problems. In terms of IGD, α = 0.01 ranks third on the irregular and MaF test problems, and in terms of HV, it ranks fifth and second on the irregular and MaF test problems, respectively. Overall, α = 0.01 is an appropriate choice that performs well on all three test suites.
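As a worked illustration with hypothetical values: suppose the convergence degree of a subproblem drops from \(d_{1} = 0.50\) to \(d_{1} = 0.48\) between two successive checks, giving a relative improvement rate of \((0.48 - 0.50)/0.50 = - 0.04\). With α = 0.01, this is below −α = −0.01, so the subproblem is still considered improving and the reference vectors are not adjusted; only with α > 0.04 would it count as convergent. A larger α therefore declares convergence earlier and permits more frequent, possibly premature, adjustment, which is consistent with the small value α = 0.01 performing best overall. Note that the exact form of IMR is given by (7); the relative-change form used here is an assumption for illustration.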

Conclusions

In this paper, we propose a many-objective evolutionary algorithm with metric-based reference vector adjustment named MBRA. To make the adjustment more effective and to avoid unnecessary adjustment, we use IMR, computed periodically based on the d1 distance, to determine whether to adjust the reference vectors. Only when the subproblems are considered convergent on the whole are the reference vectors allowed to be adjusted. In MBRA, we delete the invalid reference vectors associated with no solutions and generate new reference vectors according to the farthest solution in the subregion of the most crowded reference vector, i.e., the one associated with the greatest number of solutions; a sketch of this deletion-and-addition step is given below. Besides, in mating selection, the sum of objectives, along with Pareto dominance, is used to form the mating pool; in environmental selection, fitness-based sorting, which sorts solutions by a fitness computed through the convergence indicator \(I_{\varepsilon + }\), is used together with non-dominated sorting to select solutions.
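A compact sketch of the deletion-and-addition step is given below; the minimum-angle association, the identifiers, and the addition of a single vector per call are our simplifications (in practice the addition would be repeated as needed), so this illustrates the idea rather than reproducing the exact procedure of MBRA.

```python
import numpy as np

def adjust_reference_vectors(rvs, objs, ideal):
    """Delete reference vectors with no associated solutions and add a
    new unit vector toward the farthest solution in the subregion of
    the most crowded reference vector (a sketch; assumes every
    translated objective vector is nonzero)."""
    shifted = objs - ideal
    unit_o = shifted / np.linalg.norm(shifted, axis=1, keepdims=True)
    unit_r = rvs / np.linalg.norm(rvs, axis=1, keepdims=True)
    # Associate each solution with its minimum-angle reference vector.
    assoc = np.argmax(unit_o @ unit_r.T, axis=1)
    counts = np.bincount(assoc, minlength=len(rvs))
    # Subregion of the most crowded reference vector.
    members = np.where(assoc == np.argmax(counts))[0]
    # Its farthest associated solution (largest distance to the ideal point).
    far = members[np.argmax(np.linalg.norm(shifted[members], axis=1))]
    # Keep only valid reference vectors and add the new direction.
    return np.vstack([rvs[counts > 0], unit_o[far]])
```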
To test the performance of MBRA, extensive empirical experiments are conducted, including comparisons with classic and recent algorithms, an analysis of the effect of the metric-based adjustment timing strategy, and a sensitivity analysis of α. Although MBRA is shown to be effective, it still has some limitations, including poor performance on some regular test problems such as DTLZ1–DTLZ3 and on PFs with certain features, such as degenerate and multimodal fronts. The performance of MBRA on 3-objective problems is also not as good as that on many-objective problems. Future research can proceed along the following lines. First, find other ways to determine whether to adjust the reference vectors. Second, design algorithms that can solve regular and irregular problems simultaneously. Third, apply our method to constrained MOPs and real-world problems.

Declarations

Conflict of interest

The authors declare that there is no conflict of interest in this paper.

Ethical approval

This paper does not contain any studies with human participants or animals performed by any of the authors.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendix

Supplementary Information

Below is the link to the electronic supplementary material.
Literature
1. Shim VA, Tan KC, Cheong CY (2012) A hybrid estimation of distribution algorithm with decomposition for solving the multiobjective multiple traveling salesman problem. IEEE Trans Syst Man Cybern C Appl Rev 42(5):682–691
2. Cheng R, Rodemann T, Fischer M, Olhofer M, Jin Y (2017) Evolutionary many-objective optimization of hybrid electric vehicle control: from general optimization to preference articulation. IEEE Trans Emerg Topics Comput Intell 1(2):97–111
3. Ye X, Liu S, Yin Y, Jin Y (2017) User-oriented many-objective cloud workflow scheduling based on an improved knee point driven evolutionary algorithm. Knowledge-Based Syst 135:113–124
4. Zhang X, Zhou K, Pan H, Zhang L, Zeng X, Jin Y (2020) A network reduction-based multiobjective evolutionary algorithm for community detection in large-scale complex networks. IEEE Trans Cybern 50(2):703–716
5. Zhou A, Qu B, Li H, Zhao S, Suganthan PN, Zhang Q (2011) Multiobjective evolutionary algorithms: a survey of the state of the art. Swarm Evol Comput 1(1):32–49
6. Hussain K, Mohd Salleh MN, Cheng S, Shi Y (2019) Metaheuristic research: a comprehensive survey. Artif Intell Rev 52:2191–2233
7. Li B, Li J, Tang K, Yao X (2015) Many-objective evolutionary algorithms: a survey. ACM Comput Surv 48(1):1–35
8. Laumanns M, Thiele L, Deb K, Zitzler E (2002) Combining convergence and diversity in evolutionary multiobjective optimization. Evol Comput 10(3):263–282
9. Pierro FD, Khu ST, Savic DA (2007) An investigation on preference order ranking scheme for multiobjective evolutionary optimization. IEEE Trans Evol Comput 11(1):17–45
10. Zou J, Yang Q, Yang S, Zheng J (2020) Ra-dominance: a new dominance relationship for preference-based evolutionary multiobjective optimization. Appl Soft Comput 90(1):106192
11. Zou X, Chen Y, Liu M, Kang L (2008) A new evolutionary algorithm for solving many-objective optimization problems. IEEE Trans Syst Man Cybern B Cybern 38(5):1402–1412
12. He Z, Yen GG, Zhang J (2014) Fuzzy-based Pareto optimality for many-objective evolutionary algorithms. IEEE Trans Evol Comput 18(2):269–285
13. Das SS, Islam MM, Arafat NA (2019) Evolutionary algorithm using adaptive fuzzy dominance and reference point for many-objective optimization. Swarm Evol Comput 44:1092–1107
14. Yang S, Li M, Liu X, Zheng J (2013) A grid-based evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 17(5):721–736
15. Li L, Li G, Chang L (2020) A many-objective particle swarm optimization with grid dominance ranking and clustering. Appl Soft Comput 96:106661
16. Zhu C, Xu L, Goodman ED (2016) Generalization of Pareto-optimality for many-objective evolutionary optimization. IEEE Trans Evol Comput 20(2):299–315
17. Zhu S, Xu L, Goodman ED, Lu Z (2022) A new many-objective evolutionary algorithm based on generalized Pareto dominance. IEEE Trans Cybern 52(8):7776–7790
18. Li M, Yang S, Liu X (2014) Shift-based density estimation for Pareto-based algorithms in many-objective optimization. IEEE Trans Evol Comput 18(3):348–365
19. Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197
20. Elarbi M, Bechikh S, Gupta A, Ben Said L, Ong Y-S (2018) A new decomposition-based NSGA-II for many-objective optimization. IEEE Trans Syst Man Cybern Syst 48(7):1191–1210
21. Tian Y, Cheng R, Zhang X, Su Y, Jin Y (2019) A strengthened dominance relation considering convergence and diversity for evolutionary many-objective optimization. IEEE Trans Evol Comput 23(2):331–345
22. Shen J, Wang P, Wang X (2022) A controlled strengthened dominance relation for evolutionary many-objective optimization. IEEE Trans Cybern 52(5):3645–3657
23. Sun Y, Yen GG, Zhang Y (2019) IGD indicator-based evolutionary algorithm for many-objective optimization problems. IEEE Trans Evol Comput 23(2):173–187
24. Bader J, Zitzler E (2011) HypE: an algorithm for fast hypervolume-based many-objective optimization. Evol Comput 19(1):45–76
25. Gómez RH, Coello Coello CA (2015) Improved metaheuristic based on the R2 indicator for many-objective optimization. In: Proc Annu Conf Genet Evol Comput, Madrid, Spain, pp 679–686
26. Li F, Cheng R, Liu J, Jin Y (2018) A two-stage R2 indicator based evolutionary algorithm for many-objective optimization. Appl Soft Comput 67:245–260
27. Li M, Yang S, Liu X (2016) Pareto or non-Pareto: bi-criterion evolution in multiobjective optimization. IEEE Trans Evol Comput 20(5):645–665
28. Zhou J, Yao X, Gao L, Hu C (2021) An indicator and adaptive region division based evolutionary algorithm for many-objective optimization. Appl Soft Comput 99:106872
29. Tian Y, Zhang X, Cheng R, Jin Y (2016) A multi-objective evolutionary algorithm based on an enhanced inverted generational distance metric. In: Proc IEEE Congr Evol Comput, Vancouver, BC, Canada
30. Tian Y, Cheng R, Zhang X, Cheng F, Jin Y (2018) An indicator based multi-objective evolutionary algorithm with reference point adaptation for better versatility. IEEE Trans Evol Comput 22(4):609–622
31. Pamulapati T, Mallipeddi R, Suganthan PN (2019) ISDE+: an indicator for multi and many-objective optimization. IEEE Trans Evol Comput 23(2):346–352
32. Li M, Yao X (2020) Quality evaluation of solution sets in multiobjective optimisation: a survey. ACM Comput Surv 52(2):1–38
33. Falcón-Cardona JG, Coello Coello CA (2020) Indicator-based multi-objective evolutionary algorithms: a comprehensive survey. ACM Comput Surv 53(2):1–35
34. Zhang Q, Li H (2007) MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans Evol Comput 11(6):712–731
35. Liu H, Gu F, Zhang Q (2014) Decomposition of a multiobjective optimization problem into a number of simple multiobjective subproblems. IEEE Trans Evol Comput 18(3):450–455
36. Liu H, Chen L, Zhang Q, Deb K (2018) Adaptively allocating search effort in challenging many-objective optimization problems. IEEE Trans Evol Comput 22(3):433–448
37. Qi Y, Ma X, Fang L, Jiao L, Sun J, Wu J (2014) MOEA/D with adaptive weight adjustment. Evol Comput 22(2):231–264
38. Li M, Yao X (2020) What weights work for you? Adapting weights for any Pareto front shape in decomposition-based evolutionary multiobjective optimization. Evol Comput 28(2):227–253
39. Das I, Dennis JE (1998) Normal-boundary intersection: a new method for generating the Pareto surface in nonlinear multicriteria optimization problems. SIAM J Optim 8(3):631–657
40. Deb K, Jain H (2014) An evolutionary many-objective optimization algorithm using reference-point based nondominated sorting approach, part I: solving problems with box constraints. IEEE Trans Evol Comput 18(4):577–601
41. Jiang S, Yang S (2017) A strength Pareto evolutionary algorithm based on reference direction for multiobjective and many-objective optimization. IEEE Trans Evol Comput 21(3):329–346
42. Jaszkiewicz A (2002) On the performance of multiple-objective genetic local search on the 0/1 knapsack problem—a comparative experiment. IEEE Trans Evol Comput 6(4):402–412
43. Tan Y, Jiao Y, Li H, Wang X (2013) MOEA/D + uniform design: a new version of MOEA/D for optimization problems with many objectives. Comput Oper Res 40(6):1648–1660
44. Wang Z, Zhang Q, Li H, Ishibuchi H, Jiao L (2017) On the use of two reference points in decomposition based multiobjective evolutionary algorithms. Swarm Evol Comput 34:89–102
45. Ishibuchi H, Setoguchi Y, Masuda H, Nojima Y (2017) Performance of decomposition-based many-objective algorithms strongly depends on Pareto front shapes. IEEE Trans Evol Comput 21(2):169–190
46. Ma X, Yu Y, Li X, Qi Y, Zhu Z (2020) A survey of weight vector adjustment methods for decomposition-based multiobjective evolutionary algorithms. IEEE Trans Evol Comput 24(4):634–649
47. Deb K, Jain H (2014) An evolutionary many-objective optimization algorithm using reference-point based nondominated sorting approach, part II: handling constraints and extending to an adaptive approach. IEEE Trans Evol Comput 18(4):602–622
48. Liu Q, Jin Y, Heiderich M, Rodemann T (2019) Adaptation of reference vectors for evolutionary many-objective optimization of problems with irregular Pareto fronts. In: Proc IEEE Congr Evol Comput, Wellington, New Zealand, pp 1726–1733
49. Cheng R, Jin Y, Olhofer M, Sendhoff B (2016) A reference vector guided evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 20(5):773–791
50. Asafuddoula M, Singh HK, Ray T (2017) An enhanced decomposition-based evolutionary algorithm with adaptive reference vectors. IEEE Trans Cybern 48(8):2321–2334
51. Wu M, Li K, Kwong S, Zhang Q, Zhang J (2018) Learning to decompose: a paradigm for decomposition-based multiobjective optimization. IEEE Trans Evol Comput 23(3):376–390
52. Luque M, Gallardo SG, Saborido R, Ruiz AB (2020) Adaptive global WASF-GA to handle many-objective optimization problems. Swarm Evol Comput 54:100644
53. Gu F, Cheung YM (2017) Self-organizing map-based weight design for decomposition-based many-objective evolutionary algorithm. IEEE Trans Evol Comput 22(2):211–225
54. Liu Y, Ishibuchi H, Masuyama N, Nojima Y (2020) Adapting reference vectors and scalarizing functions by growing neural gas to handle irregular Pareto fronts. IEEE Trans Evol Comput 24(3):439–453
55. Liu Q, Jin Y, Heiderich M, Rodemann T, Yu G (2022) An adaptive reference vector-guided evolutionary algorithm using growing neural gas for many-objective optimization of irregular problems. IEEE Trans Cybern 52(5):2698–2711
56. Denysiuk R, Costa L, Santo IE (2014) Clustering-based selection for evolutionary many-objective optimization. In: Proc Int Conf Parallel Probl Solving Nat, pp 538–547
57. Hua Y, Jin Y, Hao K (2019) A clustering-based adaptive evolutionary algorithm for multiobjective optimization with irregular Pareto fronts. IEEE Trans Cybern 49(7):2758–2770
58. Tian Y, Cheng R, Zhang X, Jin Y (2017) PlatEMO: a MATLAB platform for evolutionary multi-objective optimization. IEEE Comput Intell Mag 12(4):73–87
59. Xiang Y, Zhou Y, Yang X, Huang H (2020) A many-objective evolutionary algorithm with Pareto-adaptive reference points. IEEE Trans Evol Comput 24(1):99–113
60. Xu H, Zeng W, Zeng X, Yen GG (2021) A polar-metric-based evolutionary algorithm. IEEE Trans Cybern 51(7):3429–3440
61. Liu Y, Gong D, Sun X, Zhang Y (2017) Many-objective evolutionary optimization based on reference points. Appl Soft Comput 50:344–355
62. Sun Y, Xue B, Zhang M, Yen GG (2019) A new two-stage evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 23(5):748–761