GP-DEMO: Differential Evolution for Multiobjective Optimization based on Gaussian Process models

https://doi.org/10.1016/j.ejor.2014.04.011

Highlights

  • New surrogate-model-based evolutionary multiobjective algorithm GP-DEMO is proposed.

  • New relations for comparing solutions under uncertainty are defined.

  • Comparing solutions under uncertainty prevents wrongly performed comparisons.

  • Compared to DEMO, GP-DEMO obtained similar results with fewer exact evaluations.

Abstract

This paper proposes a novel surrogate-model-based multiobjective evolutionary algorithm called Differential Evolution for Multiobjective Optimization based on Gaussian Process models (GP-DEMO). The algorithm is based on newly defined relations for comparing solutions under uncertainty. These relations minimize the possibility of wrongly performed comparisons of solutions due to inaccurate surrogate-model approximations. The GP-DEMO algorithm was tested on several benchmark problems and two computationally expensive real-world problems. To assess the results, we compared them with another surrogate-model-based algorithm called Generational Evolution Control (GEC) and with Differential Evolution for Multiobjective Optimization (DEMO). The quality of the results obtained with GP-DEMO was similar to that obtained with DEMO, but with significantly fewer exactly evaluated solutions during the optimization process. The quality of the results obtained with GEC was lower than with GP-DEMO and DEMO, mainly due to wrongly performed comparisons of inaccurately approximated solutions.

Introduction

Optimization problems are present in our everyday life and come in a variety of forms, e.g., optimizing certain properties of a system by appropriately choosing its parameters. Many of these problems require the simultaneous optimization of multiple, often conflicting, criteria (or objectives). Such problems are called multiobjective optimization problems. Their solution is not a single point, but a family of points known as the Pareto-optimal set. This set of solutions gives the decision maker insight into the characteristics of the problem before a single solution is chosen.

One of the most effective ways to solve problems with multiple objectives is to use multiobjective evolutionary algorithms (MOEAs). MOEAs are population-based algorithms inspired by optimization processes that occur in nature. During the optimization process, many different solutions have to be assessed (evaluated) in order to find a Pareto-optimal set. These evaluations can be costly, dangerous or computationally expensive. In such cases the goal is to minimize the number of exactly evaluated solutions while still finding the best solutions. In this paper we focus on computationally expensive problems, where a single solution evaluation takes a long time.

In order to obtain the results of such an optimization problem more quickly (or even be able to obtain them in a reasonable amount of time), we can use surrogate models in the optimization process to approximate the objective functions of the problem. Instead of using a time-consuming exact evaluation, a solution can be approximated with the surrogate model. Since a single approximation is (much) faster, the whole optimization process can be accelerated. However, the time needed to create and update the surrogate models during the optimization process has to be included in its overall duration. Consequently, when exact solution evaluations are quick, surrogate-model-based optimization can take longer than optimization without surrogates.

In surrogate-model-based multiobjective optimization, approximated values are often inappropriately used in the solution comparison. As a consequence, exactly evaluated good solutions can be discarded from the population because they appear to be dominated by the inaccurate and over-optimistic approximations. This can slow the optimization process or even prevent the algorithm from finding the best solutions.

Some surrogate models provide a distribution, from which the approximated value and also the confidence interval of the approximation can be calculated. Using this confidence interval, we define new dominance relations that take into account this uncertainty and propose a new concept for comparing solutions under uncertainty that requires exact evaluations only in cases where more certainty is needed. This minimizes the mistakes made in comparisons of inaccurately approximated solutions.
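The idea of interval-based comparison can be illustrated with a small sketch. This is a simplified illustration, not the paper's exact relations: each objective of a solution is represented as a predicted mean plus a confidence half-width, two solutions are compared only through their intervals, and the comparison is declared uncertain (so an exact evaluation would be requested) whenever the intervals overlap.

```python
# Illustrative sketch (not the paper's exact definitions) of comparing two
# minimization solutions whose objectives are known only as surrogate
# predictions with confidence intervals: pairs (mean, half-width).

def interval_dominates(a, b):
    """True if `a` dominates `b` even in the worst case: every upper bound
    of `a` is no worse than the corresponding lower bound of `b`."""
    better_somewhere = False
    for (ma, ea), (mb, eb) in zip(a, b):
        if ma + ea > mb - eb:   # a's worst case is worse than b's best case
            return False
        if ma + ea < mb - eb:
            better_somewhere = True
    return better_somewhere

def compare_under_uncertainty(a, b):
    """Three-way outcome: 'a', 'b', or 'uncertain' (exact evaluation needed)."""
    if interval_dominates(a, b):
        return "a"
    if interval_dominates(b, a):
        return "b"
    return "uncertain"

x = [(1.0, 0.1), (2.0, 0.1)]
y = [(3.0, 0.2), (4.0, 0.2)]
z = [(1.1, 0.5), (2.1, 0.5)]   # intervals overlap x's

print(compare_under_uncertainty(x, y))  # 'a'
print(compare_under_uncertainty(x, z))  # 'uncertain'
```

Note that a conventional comparison of the means alone would declare x better than z; the intervals reveal that this conclusion is not warranted by the model's accuracy.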

Based on this concept we propose a new surrogate-model-based multiobjective evolutionary algorithm, called Differential Evolution for Multiobjective Optimization based on Gaussian Process modeling (GP-DEMO). This algorithm is an extension of the Differential Evolution for Multiobjective Optimization (DEMO) algorithm (Robič & Filipič, 2005), which uses differential evolution to effectively solve numerical multiobjective optimization problems and, in addition, emphasizes the variation operators. In GP-DEMO, Gaussian Process (GP) modeling is employed to find approximate solution values together with their confidence intervals. Then, instead of comparing the solutions using the Pareto dominance relation, GP-DEMO uses the new uncertainty-based dominance relations, requiring exact evaluations of solutions as needed. The efficiency of GP-DEMO is assessed on several benchmark and two real-world optimization problems.
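The selection step described above can be sketched as follows. The helper names and the interval handling here are illustrative assumptions, not the paper's exact procedure: a candidate is first judged against its parent using cheap surrogate approximations with a confidence half-width, and the expensive exact evaluation is performed only when the surrogate comparison is inconclusive (minimization assumed).

```python
# Hedged sketch of GP-DEMO-style selection for one candidate. All helper
# names (exact_eval, approx_eval) are hypothetical placeholders for the
# expensive simulator and the GP approximation, respectively.

def surrogate_selection(parent_obj, exact_eval, approx_eval, width):
    mean = approx_eval()                      # cheap surrogate approximation
    hi = [m + width for m in mean]            # pessimistic bound per objective
    lo = [m - width for m in mean]            # optimistic bound per objective
    if all(h < p for h, p in zip(hi, parent_obj)):
        return "candidate", mean              # candidate wins even in the worst case
    if all(l > p for l, p in zip(lo, parent_obj)):
        return "parent", None                 # candidate cannot win
    return "exact", exact_eval()              # uncertain: pay for an exact evaluation

parent = [1.0, 1.0]
decision, obj = surrogate_selection(parent, lambda: [0.9, 0.6], lambda: [0.2, 0.2], 0.1)
print(decision)  # 'candidate' -- no exact evaluation was needed
```

The point of the design is visible in the return values: the expensive `exact_eval` call is reached only on the third branch, which is how such a scheme reduces the number of exactly evaluated solutions.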

The structure of this paper is as follows. In Section 2, we overview the work done in the field of surrogate-model-based optimization, especially in multiobjective optimization. In Section 3, we describe the Gaussian Process modeling that is used to build the surrogate models in GP-DEMO. Then, in Section 4, we describe the new relations and methods for comparing solutions (presented with and without uncertainty). The GP-DEMO algorithm is presented in Section 5. In Section 6, we test and compare GP-DEMO with two other algorithms on benchmark and real-world multiobjective optimization problems. Finally, Section 7 concludes the paper with a summary of the work done and our ideas for future work.


Related work

In the literature, the term surrogate-model-based (sometimes also meta-model-based) optimization refers to optimization processes in which some solutions are not evaluated with the original objective function, but are approximated using a model of this function. Different modeling methods are used to build the surrogate models, and similar methods are used for single- and multiobjective optimization. These methods typically return only one approximated value, which is why in multiobjective problems

Gaussian Process models

Gaussian Process (GP) models are probabilistic, nonparametric models based on the principles of Bayesian probability, which can be used for both regression and classification problems. The name refers to the assumption that the prior on the function to be modeled is a stochastic process with a normal distribution, i.e., a Gaussian Process (GP).

GPs were first used in the 1940s for time series prediction (Kolmogorov, 1941; Wiener, 1949). In the 1970s, GPs were widely used in the field
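The key property exploited in GP-DEMO is that a GP posterior yields not only a predictive mean but also a predictive variance. A minimal regression sketch, using a squared-exponential kernel with fixed hyperparameters chosen purely for illustration (the paper does not prescribe these values):

```python
import numpy as np

# Minimal GP regression sketch: the posterior gives both a predictive mean
# and a standard deviation, from which confidence intervals follow.

def sq_exp_kernel(A, B, length=1.0, signal=1.0):
    # Squared-exponential covariance between row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return signal**2 * np.exp(-0.5 * d2 / length**2)

def gp_predict(X, y, X_new, noise=1e-6):
    K = sq_exp_kernel(X, X) + noise * np.eye(len(X))      # training covariance
    Ks = sq_exp_kernel(X_new, X)                          # cross-covariance
    Kss = sq_exp_kernel(X_new, X_new)
    mean = Ks @ np.linalg.solve(K, y)                     # posterior mean
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)             # posterior covariance
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

X = np.array([[0.0], [1.0], [2.0]])
y = np.sin(X).ravel()
mean, std = gp_predict(X, y, np.array([[1.0], [3.0]]))
# The prediction at a training input nearly reproduces the data, while the
# standard deviation grows away from the data.
```

It is exactly this growing standard deviation that signals when an approximation should not be trusted in a comparison.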

Relations in multiobjective optimization

A multiobjective optimization problem (MOP) consists of finding the minimum of the function

f: X → Z, f: (x1, …, xn) ↦ (f1(x1, …, xn), …, fm(x1, …, xn)),

where n is the number of variables and m is the number of objectives, and where each solution x = (x1, …, xn) ∈ X is called a decision vector, while the corresponding element z = f(x) ∈ Z is an objective vector. We use this problem formulation to describe the relations between the solutions presented without and with uncertainty.
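For exactly evaluated solutions, the standard Pareto dominance relation on objective vectors applies; it can be written in a few lines (minimization, as in the problem statement above):

```python
# Standard Pareto dominance for minimization: z1 dominates z2 if it is no
# worse in every objective and strictly better in at least one.

def dominates(z1, z2):
    return (all(a <= b for a, b in zip(z1, z2))
            and any(a < b for a, b in zip(z1, z2)))

print(dominates((1.0, 2.0), (2.0, 2.0)))  # True
print(dominates((1.0, 3.0), (2.0, 2.0)))  # False: the vectors are incomparable
```

Note that two objective vectors can be mutually non-dominating, which is why the solution of a MOP is a set rather than a single point.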

The GP-DEMO algorithm

The GP-DEMO algorithm for surrogate-model-based optimization is, as the name suggests, built upon the DEMO algorithm (Robič & Filipič, 2005). DEMO is a multiobjective evolutionary algorithm based on Differential Evolution (DE) (Price & Storn, 1997). Like DE, DEMO is easy to understand and implement, and very effective on numerical problems. The main disadvantage of this algorithm is that it is not suited for solving combinatorial problems because the candidate creation uses vector addition and
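The candidate creation mentioned above can be sketched with the classic DE/rand/1/bin scheme; the parameter values F and CR below are illustrative defaults, not values prescribed by the paper:

```python
import random

# Sketch of DE/rand/1/bin candidate creation: a mutant is built by vector
# addition of a scaled difference of two population members to a third,
# then combined with the parent by binomial crossover. The reliance on
# vector arithmetic is what makes the scheme suit numerical rather than
# combinatorial search spaces.

def de_candidate(parent, population, F=0.5, CR=0.9, rng=random):
    r1, r2, r3 = rng.sample([p for p in population if p is not parent], 3)
    mutant = [a + F * (b - c) for a, b, c in zip(r1, r2, r3)]
    j_rand = rng.randrange(len(parent))   # ensure at least one gene is mutated
    return [m if (rng.random() < CR or j == j_rand) else p
            for j, (p, m) in enumerate(zip(parent, mutant))]

population = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.5], [0.5, 2.0]]
child = de_candidate(population[0], population)
```

In DEMO the resulting candidate then competes with its parent; in GP-DEMO this competition is the point where the uncertainty-based comparison replaces plain Pareto dominance.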

Numerical evaluation

This section describes the test problems that were used in this study, presents the settings used for testing, shows the results of the optimization, and compares the results gained with different algorithms. The section concludes with the discussion and explanation of the results.

Conclusion

In this paper, the newly developed surrogate-model-based multiobjective evolutionary algorithm GP-DEMO is presented. GP-DEMO is based on the algorithm DEMO and uses the GP model as a surrogate model for approximating solutions. To prevent wrongly performed comparisons due to inaccurately approximated solutions, new relations for comparing solutions under uncertainty are suggested. These relations, which also consider the confidence intervals of the approximations, are used for every

Acknowledgments

The work presented in this paper was carried out under young researcher grant M090248, research programme P2-0209, and research projects L2-3651 and J2-4120, all funded by the Slovenian Research Agency. The authors are grateful to Dr. Robert Vertnik for providing a numerical simulator of the continuous casting process.

References (61)

  • Depolli, M., et al. (2008). Computer-simulated alternative modes of U-wave genesis. Journal of Cardiovascular Electrophysiology.
  • Emmerich, M. (2005). Single- and multi-objective evolutionary design optimization assisted by Gaussian random field...
  • Emmerich, M. T., Deutz, A., & Klinkenberg, J. (2011). Hypervolume-based expected improvement: Monotonicity properties...
  • Emmerich, M., et al. (2006). Single- and multiobjective evolutionary optimization assisted by Gaussian random field metamodels. IEEE Transactions on Evolutionary Computation.
  • Emmerich, M., et al. Metamodel-assisted evolution strategies.
  • Emmerich, M., & Naujoks, B. (2004a). Metamodel-assisted multiobjective optimization with implicit constraints and its...
  • Emmerich, M., et al. Metamodel assisted multiobjective optimisation strategies and their application in airfoil design.
  • Grašič, B., et al. (2006). Ozone prediction based on neural networks and Gaussian processes. Nuovo Cimento C.
  • Grierson, D. E., et al. (1993). Optimal sizing, geometrical and topological design using a genetic algorithm. Structural Optimization.
  • Gunter, R. (2001). A partial order approach to noisy fitness functions. In 2001 IEEE Congress on Evolutionary...
  • Hardy, R. L. (1971). Multiquadric equations of topography and other irregular surfaces. Journal of Geophysical Research.
  • Huband, S., et al. A scalable multi-objective test problem toolkit.
  • Jin, Y. (2003). A comprehensive survey of fitness approximation in evolutionary computation. Soft Computing.
  • Jin, Y., Olhofer, M., & Sendhoff, B. (2001). Managing approximate models in evolutionary aerodynamic design...
  • Jones, D. R., et al. (1998). Efficient global optimization of expensive black-box functions. Journal of Global Optimization.
  • Kleijnen, J. P., et al. (2004). Application-driven sequential designs for simulation experiments: Kriging metamodelling. Journal of the Operational Research Society.
  • Knowles, J. (2006). ParEGO: A hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems. IEEE Transactions on Evolutionary Computation.
  • Kolmogorov, A. N. (1941). Interpolation und Extrapolation von stationären zufälligen Folgen [Interpolation and extrapolation of stationary random sequences]. Bulletin Academy Science USSR Series in Mathematics.
  • Krige, D. G. (1951). A statistical approach to some basic mine valuation problems on the Witwatersrand. Journal of the Chemical, Metallurgical and Mining Society of South Africa.
  • Li, G., et al. (2009). Improving multi-objective genetic algorithms with adaptive design of experiments and online metamodeling. Structural and Multidisciplinary Optimization.